dragondm | btw, having the CLOUDSERVERS vars in novarc is nice, but there is a bug. CLOUD_SERVERS_URL has the wrong port | 00:20 |
dragondm | oops, wrong channel. | 00:21 |
rlucio | dubsquared1: did you log a bug for the vm launch failure you saw earlier? | 00:24 |
rlucio | dubsquared1: b/c i am getting the same issue now too | 00:24 |
uvirtbot | New bug: #701731 in nova "icmp rules created with euca-authorize improperly use port-range for icmp type/code" [Undecided,New] https://launchpad.net/bugs/701731 | 00:31 |
jt_zg | Hey all, I was wondering where I can find the api key that cyberduck is asking for | 00:44 |
vishy | soren: about the iscsi, it seems safe to add the rule to iscsi directly if we use the full targetname | 00:46 |
vishy | for example: iqn.2010-10.org.openstack:volume-00000001 | 00:46 |
colinnich | jt_zg: username is your account and username combined, i.e. account:username, and the api key is your password | 00:46 |
jt_zg | thanks colinnich I'll try that | 00:47 |
jt_zg | colinnich, that worked! Thanks | 00:47 |
colinnich | jt_zg: cool, no problem | 00:48 |
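For reference, the account:username convention colinnich describes is Swift's v1.0 auth handshake, which is what Cyberduck performs under the hood. Below is a minimal sketch using only the Python standard library; the host, port, path, and credentials are placeholders to adapt to your deployment (the dev auth server of this era typically listened on port 11000 at /v1.0, swauth at /auth/v1.0 on the proxy):

```python
import httplib

# Placeholder endpoint and credentials -- substitute your own.
AUTH_HOST, AUTH_PORT, AUTH_PATH = 'auth.example.com', 11000, '/v1.0'
ACCOUNT, USER, PASSWORD = 'myaccount', 'myuser', 'secret'

conn = httplib.HTTPConnection(AUTH_HOST, AUTH_PORT)
# "account:username" goes in X-Auth-User; the password is the "api key".
conn.request('GET', AUTH_PATH, headers={
    'X-Auth-User': '%s:%s' % (ACCOUNT, USER),
    'X-Auth-Key': PASSWORD})
resp = conn.getresponse()
# A 2xx response carries the storage URL and token used on every
# subsequent storage request.
print resp.status
print resp.getheader('x-storage-url')
print resp.getheader('x-auth-token')
```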
uvirtbot | New bug: #701734 in nova "vm launch fails if security-group chain file already exists" [Undecided,New] https://launchpad.net/bugs/701734 | 00:51 |
jt_zg | colinnich, Cyberduck seems to hang when connecting/listing directories. Is this common? | 00:52 |
vishy | soren: without iscsidev.sh finding the actual sd* device is a little tough | 00:53 |
colinnich | jt_zg: It worked for me last time I tried it, but I'm now using swauth for authentication and cyberduck doesn't seem to be compatible | 00:55 |
jt_zg | colinnich, thanks for letting me know. Any recommendations for 'pretty front-ends' to demo Swift to my boss? | 00:56 |
colinnich | jt_zg: Not really, no - cyberduck is the only one I know of. And I wouldn't say it was pretty :-) | 00:57 |
jt_zg | colinnich, fair :P I was being generous | 00:57 |
jt_zg | So, it's safe to say I'm writing an API if I want to get the most bang for my buck with Swift? | 00:57 |
colinnich | jt_zg: if you are using it in-house, then yes probably. | 00:58 |
jt_zg | colinnich, that's fair. I guess I'd better get to work! | 00:59 |
colinnich | jt_zg: And I'd better get to bed, it's 1am | 01:00 |
jt_zg | colinnich, night! thanks for the help | 01:00 |
colinnich | jt_zg: just thought of something... I take it cyberduck isn't on the same machine as swift? | 01:09 |
jt_zg | colinnich, right | 01:09 |
jt_zg | I have swift running on 5 storage, 1 auth, 1 proxy server (dedicated). I have cyberduck running in an XP VM on my desktop machine | 01:10 |
colinnich | jt_zg: have you changed the swift cluster url away from 127.0.0.1? | 01:10 |
colinnich | jt_zg: either the default or at least your account | 01:10 |
jt_zg | colinnich, hmm, let me check. That's in the auth configs? | 01:10 |
jt_zg | colinnich, Hmm, it seems to be pointing to an internal IP | 01:11 |
colinnich | jt_zg: the default is, yes. To find out what your account is, you could look in /etc/swift/auth.db | 01:11 |
colinnich | jt_zg: that would explain cyberduck hanging if it couldn't connect to that ip | 01:11 |
jt_zg | colinnich, certainly does. I'll switch it to the external IP | 01:12 |
colinnich | jt_zg: good luck, I'm definitely off to bed now | 01:12 |
jt_zg | colinnich, I believe you :P | 01:12 |
jt_zg | thanks again | 01:12 |
colinnich | jt_zg: np | 01:12 |
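The /etc/swift/auth.db file mentioned above is an ordinary SQLite database, so the per-account storage URL can be inspected without knowing the schema in advance; a minimal sketch:

```python
import sqlite3

# Path from the conversation above; adjust if your auth server keeps it elsewhere.
conn = sqlite3.connect('/etc/swift/auth.db')

# Dump every table without assuming anything about the schema.
tables = [row[0] for row in
          conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]
for table in tables:
    print '==', table
    for row in conn.execute('SELECT * FROM %s' % table):
        print row
```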
ehazlett | greetings... i'm following the novainstall doc on virtualbox -- when launching the instance i see it running but get "No route to host" when trying to ssh... any ideas? | 01:22 |
openstackhudson | Project nova build #387: SUCCESS in 1 min 22 sec: http://hudson.openstack.org/job/nova/387/ | 01:33 |
openstackhudson | Tarmac: Fixes bug #701575: run_tests.sh fails with a meaningless error if virtualenv is not installed. Proposed fix tries to use easy_install to install virtualenv if not present. | 01:33 |
uvirtbot | Launchpad bug 701575 in nova "install_venv.py does not install virtualenv if missing" [Undecided,Fix committed] https://launchpad.net/bugs/701575 | 01:33 |
openstackhudson | Test by doing "run_tests.sh -V" on a system that has easy_install installed but not virtualenv. | 01:33 |
xtoddx | rlane: around? | 01:43 |
uvirtbot | New bug: #701749 in nova "volume creation doesn't recover from failure well" [Medium,Triaged] https://launchpad.net/bugs/701749 | 01:46 |
uvirtbot | New bug: #701748 in nova "nova-volume is too hard to set up" [Low,In progress] https://launchpad.net/bugs/701748 | 01:47 |
openstackhudson | Project nova build #388: SUCCESS in 1 min 22 sec: http://hudson.openstack.org/job/nova/388/ | 02:03 |
openstackhudson | Tarmac: Changing DN creation to do searches for entries. | 02:03 |
openstackhudson | This change adds additional interoperability (as many directory servers and LDAP admins use cn, or another attribute, as the naming attribute). DN creation will incur a slight performance penalty for doing so, as DNs must be searched for now. User and project creation skip this performance penalty, as there is no need to search for an entry that is being created. | 02:03 |
creiht | jt_zg: It also depends on your use case | 02:27 |
jt_zg | creiht, sorry, what does? | 02:27 |
creiht | If you are doing mainly backup there are tools that can integrate with swift (like duplicity) | 02:28 |
creiht | in reference to whether or not you are going to need to write something to the api | 02:28 |
jt_zg | Makes sense. I think we're going to be using Mezeo as a partner, but we also want to go the EC2 bucket route | 02:29 |
jt_zg | How do you accomplish data spanning with Swift over several servers? E.g., you have 2 servers, each with capacity for 10TB, but you have 11TB of information. Do the containers handle that automagically? | 04:35 |
jt_zg | Or are you limited by the largest continuous disk size on a server? | 04:36 |
jeremyb | jt_zg: 1st of all you never just have 2 servers | 04:43 |
jt_zg | jeremyb, I understand that | 04:43 |
jeremyb | the recommended minimum cluster size is 5 zones with 1 server each, so 5 servers total | 04:43 |
jt_zg | jeremyb, wasn't really my point. What I mean is, what if you have a cluster, but that cluster isn't large enough? | 04:44 |
jt_zg | can you span data to another cluster? | 04:44 |
jeremyb | it will fill up? | 04:44 |
jeremyb | there's a finite max limit of what a given swift cluster can store | 04:44 |
jeremyb | there's a configurable max size for an individual object | 04:44 |
jeremyb | you can't have an 11TB object with the default max of 5GB (and no one changes it) | 04:45 |
jt_zg | that's not my question... | 04:45 |
jeremyb | each object is stored with 3 complete copies so you need at least 3x the original size | 04:45 |
jeremyb | well please clarify then | 04:46 |
jt_zg | Imagine I have a 10TB server. I fill it up with one client. That client has 11TB of data. Can I stretch the container to another set of servers? | 04:46 |
jeremyb | ? | 04:47 |
jeremyb | a 10tb server or 10tb cluster? | 04:47 |
jt_zg | 10TB server. 10TB cluster of 3 storage nodes. I want to incorporate another set of 3 storage nodes to make up for that spare 1TB | 04:47 |
jt_zg | In essence, stretching and increasing that client's container beyond the first set of storage bricks, on to another set, to grow their container | 04:49 |
jeremyb | i still don't understand the scenario | 04:50 |
jeremyb | you'd probably just add them to the cluster so you'd end up with a bigger cluster | 04:50 |
notmyname | jt_zg: essentially, what you are talking about is how to expand a logical cluster | 04:52 |
jt_zg | notmyname, exactly | 04:52 |
notmyname | a swift cluster is defined by the ring(s) | 04:52 |
notmyname | so add servers to the appropriate ring, and you have more storage | 04:52 |
jt_zg | oh, that's pretty easy | 04:53 |
notmyname | you can either add to existing zones or add new zones | 04:53 |
jeremyb | but then the rings have to be pushed to all nodes | 04:53 |
notmyname | yes | 04:53 |
notmyname | but that's as complicated as rsync | 04:53 |
* jeremyb can't remember if there's a built-in tool to do it | 04:53 | |
jt_zg | so you just add other servers to a ring with a different mount device and the magic happens? | 04:53 |
notmyname | essentially. replication will handle moving the appropriate partitions to the new servers | 04:54 |
jeremyb | is it a big deal if the rings are out of sync a little? what if it takes 5 mins to get to all the machines? | 04:54 |
jt_zg | notmyname, very cool. I wasn't sure if it was possible. And wanted to confirm before deploying more hardware :P | 04:54 |
notmyname | heh, we do it all the time with cloud files | 04:54 |
jt_zg | notmyname, thanks, as always | 04:55 |
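A sketch of the expansion step notmyname describes, done programmatically; the swift-ring-builder CLI wraps a RingBuilder class, but the import path, method names, and all device values below are assumptions against the swift source of this era rather than a recipe:

```python
import cPickle as pickle
from gzip import GzipFile

from swift.common.ring.builder import RingBuilder  # location may vary by version

# An object ring: 2**17 partitions, 3 replicas, and at most one replica
# of any partition moved per rebalance window (min_part_hours=1).
builder = RingBuilder(17, 3, 1)

# One device (spindle) per storage node; every value here is hypothetical.
for i, ip in enumerate(['10.0.0.1', '10.0.0.2', '10.0.0.3']):
    builder.add_dev({'id': i, 'zone': i + 1, 'ip': ip, 'port': 6000,
                     'device': 'sdb1', 'weight': 100.0, 'meta': ''})

# Reassign partitions to devices, then write the ring file that gets
# pushed (e.g. via rsync, as discussed above) to every node.
builder.rebalance()
pickle.dump(builder.get_ring(), GzipFile('object.ring.gz', 'wb'), protocol=2)
```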
jeremyb | is cloud files one huge cluster? | 04:55 |
notmyname | per DC | 04:55 |
jeremyb | can you say how big? | 04:55 |
notmyname | we have one cluster per DC | 04:55 |
jeremyb | huh | 04:55 |
pandemicsyn | jeremyb: no built in tool, we build the rings in a directory served out by nginx | 04:55 |
notmyname | I'm not really supposed to, as far as I know :-) | 04:55 |
jeremyb | does it have multi region yet? | 04:55 |
jt_zg | We tested Gluster. Data spanning...not a futile task :S | 04:56 |
pandemicsyn | then we just dsh a script on the nodes that wgets the rings and verifies checksums and stuff | 04:56 |
jt_zg | Swift...makes it a trivial task with rings apparently | 04:56 |
jeremyb | pandemicsyn: ohhhh, you can make them only editable from one node... | 04:56 |
notmyname | pandemicsyn: am I right in thinking that one should keep zones balanced? if you have a few servers, add to existing zones; if you have a bunch of servers, add a new zone | 04:57 |
pandemicsyn | yea, its better if they're similar | 04:57 |
notmyname | jt_zg: ^ that | 04:58 |
pandemicsyn | if you have a zone that's significantly smaller you could have it at a lower weight in the ring though i guess | 04:58 |
pandemicsyn | that would just help traffic wise though i guess | 04:58 |
jeremyb | so what about the window where some nodes have the new rings and some have old? | 04:59 |
notmyname | jeremyb: it would be handled by each side (old and new) the same way an object failure would | 04:59 |
jeremyb | as in 404? | 04:59 |
notmyname | so the servers with the old ring would try to push to one server, and replication will eventually move it to the right place | 05:00 |
jeremyb | for GETs | 05:00 |
notmyname | no, it should be a 2xx response, but you may not be able to read the writes immediately | 05:00 |
notmyname | the proxy may have a new ring that doesn't know where the data is until replication moves it | 05:01 |
jeremyb | i mean for something written days earlier | 05:01 |
jeremyb | when i GET, how does it find it if it has the new ring? | 05:01 |
notmyname | it checks the mtime on the ring file and reloads it if necessary, but the ring-builder tool doesn't allow you to shoot yourself in the foot that much. it prevents things from getting too out of sync. gholt wrote that code. he would be the expert on it | 05:02 |
notmyname | pandemicsyn: do you remember those ring-builder options? | 05:03 |
pandemicsyn | sorry, R.app just locked my mac up | 05:04 |
pandemicsyn | notmyname: which ringer builder option ? the freeze/lockout time or whatever ? | 05:04 |
jeremyb | i guess that's a rackfiles client not a GNU R IDE? :) | 05:04 |
pandemicsyn | jeremyb: lol no its the R ide | 05:05 |
* pandemicsyn has a thing for graphs and stats | 05:05 | |
jeremyb | hrmmm | 05:06 |
notmyname | pandemicsyn: ya, I think I found it in the docstrings | 05:06 |
creiht | When you make changes to the ring, it will only allow you to move one replica of a partition | 05:06 |
pandemicsyn | just run "ring-builder" without args | 05:07 |
creiht | this allows the data to be still available while things are moving around | 05:07 |
pandemicsyn | its the "min_part_hours" part you want | 05:07 |
notmyname | jeremyb: run swift-ring-builder with no options and you will see a bunch of text on how that works (only moving one replica of a partition at a time, etc) | 05:07 |
jeremyb | creiht: but many partitions at a time? | 05:07 |
creiht | a minimal number | 05:07 |
jeremyb | (partitions are the thing i have like 10,000 of, right?) | 05:07 |
creiht | usually millions | 05:08 |
jeremyb | i was going to do 500k | 05:08 |
jeremyb | i'm having trouble imagining more than 50 spindles | 05:08 |
creiht | jeremyb: how big do you plan on your cluster getting? | 05:08 |
creiht | jeremyb: are you positive about that? | 05:09 |
creiht | You can almost get 50 spindles on one machine :) | 05:10 |
jeremyb | heh | 05:10 |
jeremyb | http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/ :P | 05:10 |
creiht | heh | 05:11 |
creiht | not the backblaze server again :) | 05:11 |
pandemicsyn | creiht: usb port replicators FTW | 05:11 |
creiht | lol | 05:11 |
creiht | jeremyb: what size spindles? | 05:12 |
creiht | 2T, 1.5T? | 05:12 |
creiht | if they are 2T then that maxes out at 33T user storage | 05:13 |
jeremyb | 30.2TB | 05:14 |
creiht | hehe | 05:14 |
creiht | yes | 05:14 |
creiht | :) | 05:14 |
creiht | k | 05:14 |
notmyname | assuming perfect distribution and no buffer :-) | 05:14 |
creiht | jeremyb: if that is all you need, then that will be more than plenty partitions | 05:15 |
creiht | actually probably too many :) | 05:16 |
jeremyb | creiht: current size (on one big ext3) is ~8TB. let's say 300TB max size (i was originally thinking 25TB but you got me thinking bigger). that's ~220 spindles at 1.5TB each. (some 1TB, some 2TB, maybe some 1.5TB) | 05:16 |
notmyname | isn't the "goal" to have about 100 partitions on each node (assuming uniform nodes) when the cluster is full. this gives you control of about 1% of the space per partition | 05:16 |
jeremyb | how hard is it to change partition count later? | 05:17 |
notmyname | impossible | 05:17 |
creiht | notmyname: A rough guideline is to have a minimum of 100 partitions per device at the max cluster size that you might get to | 05:17 |
creiht | not impossible, but lets pretend it is :) | 05:17 |
notmyname | ya, that's what I thought I remembered | 05:17 |
notmyname | heh | 05:17 |
jeremyb | so, node is spindle not host, right? | 05:18 |
notmyname | jeremyb: ok, not impossible, but it would require lots of downtime and code that isn't yet written | 05:18 |
jeremyb | sure | 05:18 |
creiht | jeremyb: in the ring, that is correct | 05:18 |
notmyname | node == spindle == drive | 05:18 |
creiht | and we aren't entirely sure what notmyname is suggesting would even work reasonbly :) | 05:19 |
notmyname | I'm not suggesting it! just being optimistic :-) | 05:19 |
creiht | hehe | 05:19 |
jeremyb | i'm sorry, make that 660 spindles (forgot to x3) | 05:19 |
creiht | hehe | 05:19 |
creiht | alright lets round off to 700 | 05:20 |
creiht | :) | 05:20 |
jeremyb | 660 vs. my original plan of 500k is pretty good (original plan made while trying to sleep a month ago, don't remember what variables i used and it's not recorded anywhere) | 05:20 |
creiht | so if you figured that 700 was the absolute max, then you want at least 70k partitions (700*100) | 05:21 |
creiht | hehe | 05:21 |
jeremyb | oh, i'm multiplying by 1k not 100... | 05:21 |
jeremyb | ok, so i think i'll just go with 500k (5k spindles) and forget about it | 05:22 |
notmyname | you just increased your cluster size by an order of magnitude... | 05:22 |
creiht | give yourself a little headroom and 2**17 is 131K parts, or a little more with 2**18 for 262K parts | 05:23 |
creiht | 2**17 has a max recommend number of 1310 spindles | 05:23 |
jeremyb | yeah, maybe i'm making some irrational (or baseless) judgement about how expensive excess partitions are | 05:23 |
creiht | 2**18 has 2621 | 05:23 |
creiht | they aren't too bad, but you don't want to way overestimate | 05:24 |
notmyname | more partitions translates into more overhead for the system (data lost to fs metadata, time lost to creating/rebalancing rings, etc) | 05:24 |
creiht | so I would say based on the minimal amount of information that you have given me so far, 2**17 or 2**18 should be fine for you | 05:24 |
jeremyb | right | 05:25 |
jeremyb | i'm partly limited because i can't imagine having the budget to have so many spindles (even if we had a cluster filled to the brim) | 05:26 |
creiht | I should make a chart of the ring sizes, with the max spindles for each in the docs | 05:26 |
creiht | right | 05:26 |
creiht | We estimated ours based on at what point we would run out of physical space/network/power | 05:27 |
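creiht's guideline (at least 100 partitions per device at the largest size the cluster might ever reach) reduces to a one-line calculation; a small sketch that reproduces the figures quoted above:

```python
import math

def part_power(max_spindles, parts_per_device=100):
    """Smallest power-of-two exponent giving at least parts_per_device
    partitions per spindle at maximum cluster size."""
    return int(math.ceil(math.log(max_spindles * parts_per_device, 2)))

def max_spindles(power, parts_per_device=100):
    """Maximum recommended spindle count for a given partition power."""
    return 2 ** power // parts_per_device

print part_power(700)   # -> 17 (700 * 100 = 70k, and 2**17 = 131072)
print max_spindles(17)  # -> 1310, as quoted above
print max_spindles(18)  # -> 2621
```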
notmyname | creiht: http://robhirschfeld.com/2011/01/12/openstack-swift-demo-in-a-browser/ | 05:28 |
creiht | :) | 05:29 |
creiht | that sounds familiar :) | 05:29 |
creiht | though he just has the auth working there | 05:29 |
notmyname | I told myself that I wouldn't stay up past midnight working tonight | 05:34 |
* jeremyb too | 05:34 | |
jeremyb | ugh, why did i df? dead nfs wait! oh it's back | 05:35 |
* jeremyb has another question: what happens when one of the servers (proxy/container/storage/account) dies in the middle of a request? the client transparently retries? | 05:38 | |
jeremyb | or if something != proxy broke does the proxy retry? | 05:38 |
pandemicsyn | yep | 05:39 |
creiht | if the proxy dies, then the request dies | 05:39 |
creiht | if something dies behind the proxy, it will try to work around it if it can | 05:39 |
creiht | more or less | 05:39 |
creiht | :) | 05:39 |
creiht | it depends a lot on the request | 05:39 |
jeremyb | hrmm | 05:39 |
jeremyb | anyone considered making client libs retry? | 05:39 |
jeremyb | (configurably) | 05:40 |
creiht | jeremyb: swift/common/client.py has that | 05:40 |
creiht | :) | 05:40 |
jeremyb | creiht: oh, well it was part of the original question and it sounded like the answer was no | 05:41 |
jeremyb | anyway, thanks | 05:41 |
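The retry support creiht points at wraps each request in the same generic pattern sketched below; this is an illustration of the idea, not swift/common/client.py itself, and the names are hypothetical:

```python
import time

def with_retries(fn, retries=5, backoff=1.0):
    """Call fn(); on failure, retry with exponential back-off. Only safe
    for idempotent operations such as GETs."""
    attempt = 0
    while True:
        try:
            return fn()
        except Exception:
            attempt += 1
            if attempt > retries:
                raise
            time.sleep(backoff * 2 ** (attempt - 1))

# Usage sketch with a hypothetical download function:
# data = with_retries(lambda: download_object('container', 'name'))
```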
jeremyb | night! | 05:43 |
creiht | http://etherpad.openstack.org/SwiftRingCapacity | 05:46 |
creiht | If anyone finds that useful to get a rough idea of what the different ring sizings will give you | 05:47 |
jt_zg | creiht, thanks for that link! | 06:05 |
mtaylor | eday: ping | 06:17 |
openstackhudson | Project nova build #389: SUCCESS in 1 min 24 sec: http://hudson.openstack.org/job/nova/389/ | 09:23 |
openstackhudson | Tarmac: This branch adds web based serial console access. Here is an overview of how it works (for libvirt): | 09:23 |
openstackhudson | 1. User requests an ajax console for an instance_id (either through OS api, or tools/euca-get-ajax-console) | 09:23 |
openstackhudson | a. api server calls compute worker to complete request | 09:23 |
openstackhudson | b. compute worker parses an instance's xml to locate its pseudo terminal (/dev/pts/x) | 09:23 |
openstackhudson | c. compute worker spawns an ajaxterm daemon, bound to a random port in a specified range. socat is used to connect to /dev/pts/x. Note that ajaxterm was modified in the following ways: | 09:23 |
openstackhudson | i. dies after 5 minutes of inactivity | 09:23 |
openstackhudson | ii. now requires token authentication. Previously it was trivial to hijack an ajaxterm | 09:23 |
openstackhudson | d. compute worker returns ajaxterm connect information to the api server: port, host, token | 09:23 |
openstackhudson | e. api server casts connect information to the nova-ajax-console-proxy (a new service) | 09:23 |
openstackhudson | f. api server returns a url for the ajaxterm (eg. http://nova-ajax-console-proxy/?token=123) | 09:23 |
openstackhudson | 2. User now has a url, and can paste it in a browser | 09:23 |
openstackhudson | a. Browser sends request to https://nova-ajax-console-proxy/?token=123 | 09:23 |
openstackhudson | b. nova-ajax-console-proxy maps token to connect information | 09:23 |
openstackhudson | c. nova-ajax-console-proxy constructs a proxy to the ajaxterm that is running on the host machine. This is now done with eventlet, though previously it was done using twisted | 09:23 |
openstackhudson | 3. User interacts with console through web browser | 09:23 |
openstackhudson | NOTE: For this to work as expected, serial console login must be enabled in the instance. Instructions for how to do this on ubuntu can be found here: https://help.ubuntu.com/community/SerialConsoleHowto. Note that you must actively log out of the serial console when you are finished, otherwise the console will remain open even after the ajaxterm term session has ended. | 09:23 |
openstackhudson | Also note that nova.sh has been modified in this branch to launch nova-ajax-console-proxy. | 09:23 |
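Steps 1d-2b above hinge on a small expiring map from token to ajaxterm connect info held by nova-ajax-console-proxy. A minimal sketch of just that piece; the five-minute lifetime mirrors the inactivity timeout described above, while the class and method names are hypothetical:

```python
import time
import uuid

class TokenRegistry(object):
    """Map one-time console tokens to ajaxterm connect information."""

    def __init__(self, lifetime=300):  # five minutes, per the branch notes
        self.lifetime = lifetime
        self._tokens = {}

    def register(self, host, port):
        token = uuid.uuid4().hex
        self._tokens[token] = (host, port, time.time() + self.lifetime)
        return token

    def lookup(self, token):
        entry = self._tokens.get(token)
        if entry is None or entry[2] < time.time():
            self._tokens.pop(token, None)
            return None
        return entry[:2]

registry = TokenRegistry()
token = registry.register('compute1', 10000)
print 'http://nova-ajax-console-proxy/?token=%s' % token
print registry.lookup(token)  # -> ('compute1', 10000)
```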
ttx | yay | 09:23 |
soren | Weird. We have a couple of tests that fail with Python2.7 | 10:46 |
soren | Err.. Occasionally fail with python 2.7 | 10:47 |
ttx | soren: what does it take to autoclose a bug on branch merge ? It seems to happen sometimes but not all the time | 10:48 |
soren | ttx: Example of where it didn't work= | 10:48 |
soren | ? | 10:48 |
ttx | https://bugs.launchpad.net/nova/+bug/681164 | 10:49 |
uvirtbot | Launchpad bug 681164 in nova "Use a search to find DNs instead of creating them directly from attributes" [Medium,Fix committed] | 10:49 |
ttx | I just closed it myself | 10:49 |
soren | ttx: I'm not sure. http://hudson.openstack.org/job/nova-tarmac/51641/console looks ok. | 10:52 |
soren | Oh. | 10:53 |
soren | heh. | 10:53 |
soren | It requires that the bzr branch actually itself claims to fix the bug. | 10:53 |
soren | that one didn't. | 10:53 |
ttx | the --fixes stuff ? | 10:53 |
soren | Someone must have manually linked the branch and the bug. | 10:53 |
soren | Yes. | 10:53 |
soren | This is correct behaviour, IMO. | 10:53 |
soren | Just linking a bug and a branch doesn't necessarily mean that the branch fixes the bug. They're just somehow related. | 10:54 |
ttx | right, but then we need to encourage use of --fixes if that's the only way to autoclose | 10:54 |
soren | Certainly. | 10:55 |
soren | Ok, this makes no sense. | 10:55 |
soren | I'm on a Natty box. | 10:55 |
soren | /usr/bin/python points to /usr/bin/python2.7 | 10:55 |
soren | If I run "python run_tests.py", it fails. If I run "python2.7 run_tests.py", it works. | 10:56 |
ttx | that sounds strange | 10:58 |
ttx | about --fixes, does using it automatically link the resulting branch to the bug ? | 10:58 |
soren | Yes. | 10:59 |
ttx | ok | 10:59 |
ttx | soren: could you set https://code.launchpad.net/~annegentle/nova/fixnewscript/+merge/45085 to Approved ? I think it's good now | 11:03 |
soren | ttx: Done. | 11:04 |
openstackhudson | Project nova build #390: SUCCESS in 1 min 22 sec: http://hudson.openstack.org/job/nova/390/ | 11:13 |
openstackhudson | Tarmac: Had to abandon the other branch (~annegentle/nova/newscript) because the diffs weren't working right for me. This is a fresh branch that should be merged correctly with trunk. Thanks for your patience. :) | 11:13 |
soren | That's going to look awesome in the ChangeLog. | 11:14 |
ttx | heh | 11:14 |
ttx | soren: https://code.launchpad.net/~openstack-gd/nova/nova-avail-zones/+merge/44878 pep8 woes were fixed so it is also ready for an Approved switch | 11:15 |
ttx | this proxying is a bit inefficient, I should fix that. | 11:16 |
soren | Um.. No. | 11:19 |
soren | He fixed a fraction of the pep8 problems. | 11:19 |
ttx | ah? I just checked them... | 11:19 |
ttx | maybe I'm not using the same --anal-checks flags | 11:20 |
soren | Looking at the million things hudson complained about, and then looking at the one patch he seems to have added since then: http://bazaar.launchpad.net/~openstack-gd/nova/nova-avail-zones/revision/490 | 11:20 |
soren | Or was 489 after the attempted merge, too? | 11:21 |
* ttx digs deeper | 11:21 | |
soren | Hm...You seem to be right. | 11:21 |
ttx | that message about rev490 is a bit confusing indeed | 11:24 |
ttx | since the new merge proposal includes 489 and 490. | 11:24 |
soren | Ok, reviewed. It looks good. | 11:25 |
soren | re-approved. | 11:26 |
openstackhudson | Project nova build #391: SUCCESS in 1 min 23 sec: http://hudson.openstack.org/job/nova/391/ | 11:33 |
openstackhudson | Tarmac: Added support of availability zones for compute. | 11:33 |
openstackhudson | models.Service got an additional field, availability_zone, and a ZoneScheduler was created that makes decisions based on this field. | 11:33 |
openstackhudson | Also replaced fake 'nova' zone in EC2 cloud api. | 11:33 |
uvirtbot | New bug: #701864 in nova "nova.tests.test_cloud.CloudTestCase.test_associate_disassociate_address fails under python 2.7" [Medium,In progress] https://launchpad.net/bugs/701864 | 11:35 |
soren | Muhahah. | 11:51 |
* soren updates http://wiki.openstack.org/Nova/EucalyptusFeatureComparison for the second time today. | 11:51 | |
sandywalsh | o/ | 11:52 |
Whoop | Anyone running OpenStack successfully on Ubuntu Lucid? | 12:27 |
ttx | Whoop: Nova, Swift ? Version ? | 12:28 |
Whoop | Sorry, Nova - don't mind the version (I've never used OpenStack before - I just don't want to attempt it on an OS with known problems) | 12:28 |
ttx | I have been running the bleeding-edge trunk successfully... but there is lots of flux right now with FeatureFreeze coming up tomorrow, so YMMV | 12:29 |
uvirtbot | New bug: #701880 in nova "[BFE] Merge ~citrix-openstack/nova/xenapi-glance-2 for bexar-xenapi-support-for-glance blueprint" [Undecided,New] https://launchpad.net/bugs/701880 | 12:31 |
Whoop | At a guess, how long do you reckon it'd take for someone (me) to install from scratch if they've never used it before? | 12:31 |
soren | Whoop: Depends. Single machine install... 5 minutes. | 12:37 |
soren | Whoop: Multi-node install. More. | 12:37 |
Whoop | well I mean the central management machine :P | 12:39 |
Whoop | not on each node :) | 12:39 |
Whoop | (each node is easy - tis all the same, can auto deploy that stuff) | 12:39 |
soren | There is no "central management machine" in Nova. | 12:40 |
Whoop | Hmmm ok | 12:40 |
Whoop | Guess I best go read shit | 12:40 |
Whoop | ta | 12:40 |
soren | So, Swift people... What are the odds the S3 API branch is going to land for Bexar? | 13:24 |
notmyname | creiht: ^ | 13:31 |
ttx | it's proposed, pending reviews, I think the odds are reasonably good | 13:33 |
soren | \o/ | 13:54 |
soren | \o/ * 2, even. | 13:54 |
soren | \o/ that the S3 API branch is likely to land, and \o/ that deja-dup now has Cloud Files support. | 13:54 |
uvirtbot | New bug: #701904 in nova "Logging handler warning when running nova-manage" [Undecided,New] https://launchpad.net/bugs/701904 | 14:06 |
* soren takes a (long) break | 14:16 | |
henrichrubin | hi, anyone know how to debug this: "ERROR:root:AMQP server on localhost:5672 is unreachable. Trying again in 10 seconds." It occurs on nova-compute during "nova.sh run". I was previously able to run nova using this same machine and code branch. | 16:33 |
jaypipes | annegentle: pls see my note to you here: https://code.launchpad.net/~morita-kazutaka/nova/sheepdog/+merge/45093. thanks! | 16:42 |
jaypipes | henrichrubin: do you have rabbit-mq running? | 16:43 |
henrichrubin | jaypipes: yes "# ps -ef | grep rabbit | 16:45 |
henrichrubin | rabbitmq 1865 1 0 Jan07 ? 00:00:00 /usr/lib/erlang/erts-5.7.4/bin/epmd -daemon" | 16:45 |
jaypipes | henrichrubin: does it only occur when running nova.sh or does it happen also when running nova-compute (or nova-scheduler) separately? | 16:46 |
henrichrubin | jaypipes: i killed the rabbitmq process and manually restarted. same error occurs using either nova.sh or manually running nova-compute. | 16:54 |
jaypipes | henrichrubin: k. please report a bug then. that shouldn't be happening :) | 16:55 |
jaypipes | henrichrubin: pls mention in the bug report that same error occurs regardless of using nova.sh or standalones.. | 16:55 |
jaypipes | henrichrubin: ty! | 16:55 |
henrichrubin | jaypipes: ok, i'll report a bug. any other ideas how to fix it? or why the server is unreachable, what can i do to test manually? | 16:57 |
jaypipes | henrichrubin: not sure. vishy, any suggestions? | 16:59 |
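One detail worth noting from the ps paste above: epmd is only the Erlang port mapper, so its presence alone does not prove the rabbit node itself is up. As a first manual test, it is worth confirming anything is listening where nova-compute expects the AMQP server; a stdlib-only sketch:

```python
import socket

def port_open(host='localhost', port=5672, timeout=2.0):
    """True if a TCP connection to host:port succeeds, i.e. rabbitmq
    (or at least something) is listening where nova-compute expects it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except socket.error:
        return False
    finally:
        s.close()

print port_open()  # False here matches the error nova-compute reports
```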
eday | mtaylor: pong | 17:08 |
annegentle | jaypipes: taking a look now. | 17:08 |
mtaylor | eday: I was going to ask you a question about that exception test you did a while back, but I got it sorted | 17:09 |
colinnich | notmyname: Hi. What DNS library does cname_lookup require? | 17:09 |
notmyname | dnspython | 17:09 |
eday | mtaylor: ahh, ok | 17:09 |
colinnich | notmyname: I feared that - installing that library seems to break my system | 17:10 |
colinnich | notmyname: proxy-server doesn't start properly and st hangs too | 17:10 |
notmyname | did you install it with the deb or with easy_install | 17:10 |
colinnich | with apt-get | 17:10 |
colinnich | apt-get install python-dnspython | 17:11 |
notmyname | ya, some other people have said that it breaks with the debian package. easy_install it, and it should work. we probably should make a more recent deb and put it in a ppa | 17:12 |
colinnich | notmyname: ok, I'll give that a go | 17:12 |
colinnich | notmyname: result. All is well, and proxy-server starts with cname_lookup loaded... now to test.. thanks | 17:14 |
notmyname | glad it works | 17:16 |
colinnich | notmyname: was planning on doing some documentation for cname and domain_remap. Need to get contribution permission first from my boss.... | 17:17 |
notmyname | great! | 17:17 |
colinnich | notmyname: also fixed a small problem with domain_remap relating to lower case reseller prefix | 17:17 |
colinnich | notmyname: cnames working :-) | 17:21 |
notmyname | yay! | 17:21 |
colinnich | notmyname: everything working great for me now, using trunk (and swauth) | 17:22 |
creiht | colinnich: nice | 17:23 |
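For context, cname_lookup uses dnspython to chase the Host header's CNAME down to the cluster's storage domain, which is why a broken dnspython package hung the proxy. A minimal sketch of that lookup; dns.resolver is the real dnspython module, while the domains are placeholders:

```python
import dns.resolver  # the "dnspython" package discussed above

def resolve_cname(hostname):
    """Return the CNAME target for hostname, or None if there isn't one,
    roughly what swift's cname_lookup middleware does per request."""
    try:
        answer = dns.resolver.query(hostname, 'CNAME')
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None
    return str(answer[0].target).rstrip('.')

# e.g. a vanity domain CNAMEd to the cluster's storage domain:
print resolve_cname('files.example.com')  # placeholder domain
```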
uvirtbot | New bug: #702010 in nova "novarc template points CloudServer Auth URL to wrong port" [Undecided,New] https://launchpad.net/bugs/702010 | 17:51 |
jk0 | has anything changed recently in trunk that would prevent API requests from showing up in nova-api stdout? | 18:04 |
eday | jk0: the newlog branch perhaps, it might be going to a proper log now | 18:09 |
jk0 | ah | 18:09 |
jk0 | hm | 18:09 |
jk0 | doesn't look like it is -- I'm running verbose and nodaemon | 18:10 |
dabo | ok, I've fixed the merge problems with https://code.launchpad.net/~ed-leafe/nova/xs-password-reset/+merge/45537. | 18:13 |
dabo | For some reason bazaar removed and re-added several files that had been added in a previous trunk merge, making for a messy diff. Sorry about that. | 18:13 |
sandywalsh | super fast review for some keen core devs? https://code.launchpad.net/~sandy-walsh/nova/lp702010/+merge/46022 | 18:33 |
jk0 | it's got my approval :) | 18:35 |
sandywalsh | nice ... one more core and we're golden (thx to vishy) | 18:37 |
vishy | any devs feel like tackling this one? https://bugs.launchpad.net/nova/+bug/702040 | 18:42 |
uvirtbot | Launchpad bug 702040 in nova "Virtio net drivers do not work" [Undecided,New] | 18:42 |
openstackhudson | Project nova build #392: SUCCESS in 1 min 26 sec: http://hudson.openstack.org/job/nova/392/ | 18:48 |
openstackhudson | Tarmac: My previous modifications to novarc had CLOUDSERVER_AUTH_URL pointing to the ec2 api port. Now it's correctly pointing to os api port. | 18:48 |
sandywalsh | eday, are you suggesting I take all the flags out of -api and -combined and put them in flags? | 18:50 |
eday | sandywalsh: at least that group of 4 (port/host) | 18:50 |
xtoddx | sandywalsh: look at the lp:~anso/nova/wsgirouter branch, it involves moving that stuff away and into wsgi configuration | 18:50 |
xtoddx | s/wsgi configuration/paste.deploy configuration/ | 18:51 |
eday | oh, probably better just to wait for that then | 18:51 |
eday | ignore me :) | 18:51 |
sandywalsh | xtoddx, the reason I needed it in flags was for the template generation, it isn't required in flags otherwise | 18:52 |
uvirtbot | New bug: #702040 in nova "Virtio net drivers do not work" [Undecided,New] https://launchpad.net/bugs/702040 | 18:52 |
sandywalsh | eday, this is a small fix for a real pita. Perhaps it's easier for the wsgirouter branch to adjust rather than wait? | 18:54 |
xtoddx | sandywalsh: i didn't think that flag was being used elsewhere | 18:54 |
xtoddx | sandywalsh: go ahead and merge yours and i'll fix mine later (yours is pretty small) | 18:55 |
sandywalsh | xtoddx, it wasn't, but I added cloudservers vars to novarc | 18:55 |
sandywalsh | xtoddx, thx! | 18:55 |
belred | i want to look into using openstack for my company. I'm looking at the openstack web site, and I'm having trouble finding a starting place to learn about it, what it offers, and a user's guide. Where should I start? | 19:09 |
kpepple | belred: are you looking for features / functionality or more architectural / technical discussions ? | 19:10 |
belred | I want to read both | 19:11 |
belred | i guess features and functionality would be simpler to start with. | 19:12 |
belred | I've clicked through so many pages, but I feel lost. | 19:12 |
kpepple | soooo … openstack is long on technical / code detail but fairly short on overview / deployment advice. Although this probably isn't what you want to hear, the best way to get what you want may be to pull the code and compile the docs :( | 19:13 |
kpepple | these docs are migrating to the wiki (and should be mostly there by Bexar release in early February) but mostly aren't there today. | 19:14 |
rlucio | belred: you look at this? http://nova.openstack.org/ | 19:14 |
rlucio | that should be the most up-to-date info | 19:14 |
rlucio | and at least the key concepts should give you an idea about whats going on | 19:14 |
belred | thanks | 19:15 |
kpepple | rlucio: thanks, didn't know that was up | 19:15 |
kpepple | belred: the link that rlucio gave is the docs from source | 19:15 |
belred | So, the complete documentation is checked into bzr? | 19:15 |
rlucio | np | 19:15 |
rlucio | yes | 19:15 |
kpepple | belred: i believe all docs are in the doc/ directory in bzr | 19:15 |
rlucio | the docs are checked in | 19:15 |
sandywalsh | belred, annegentle handles openstack docs | 19:15 |
fraggeln | nova != swift right? | 19:16 |
rlucio | right :) | 19:16 |
sandywalsh | fraggeln, correct. nova = compute, swift = storage | 19:16 |
sandywalsh | fraggeln, glance = swift for nova images | 19:16 |
soren | \o/ | 19:17 |
* soren now has code in bzr | 19:17 | |
soren | I mean.. I've got a patch accepted into bzr. I've had code managed "in bzr" for a while :) | 19:18 |
Whoop | :q | 19:18 |
Whoop | doh, ignore >_< | 19:18 |
soren | It's spelled "/quit" | 19:19 |
soren | :) | 19:19 |
jdurgin | soren: I updated https://code.launchpad.net/~jdurgin/nova/rbd_volume/+merge/45091 if you could take another look when you have a moment that'd be great | 19:23 |
czajkowski | paultag: aloha | 19:28 |
czajkowski | paultag: meet soren | 19:28 |
czajkowski | soren: meet paultag | 19:28 |
soren | jdurgin: approved | 19:29 |
soren | paultag: o/ | 19:29 |
paultag | heyya soren | 19:32 |
paultag | soren: czajkowski insisted that I get in touch with you guys while I go about my job searches :) | 19:32 |
paultag | I can't say that I complained too much ;) | 19:33 |
czajkowski | I said talk to, not insist, wise ass | 19:33 |
paultag | oi oi! | 19:33 |
soren | paultag: Heh :) | 19:35 |
soren | -> /msg | 19:36 |
paultag | cheers :) | 19:36 |
openstackhudson | Project nova build #393: SUCCESS in 1 min 24 sec: http://hudson.openstack.org/job/nova/393/ | 19:38 |
openstackhudson | Tarmac: This branch adds a backend for using RBD (RADOS Block Device) volumes in nova via libvirt/qemu. | 19:38 |
openstackhudson | This is described in the blueprint here: https://blueprints.launchpad.net/nova/+spec/ceph-block-driver | 19:38 |
openstackhudson | Testing requires Ceph and the latest qemu and libvirt from git. Instructions for installing these can be found on the Ceph wiki (http://ceph.newdream.net/wiki/#Getting_Ceph and http://ceph.newdream.net/wiki/QEMU-RBD). | 19:38 |
soren | jdurgin: ^ congrats :) | 19:43 |
dabo | Thanks to eday, I've fixed the merge issues. Reviews please! https://code.launchpad.net/~ed-leafe/nova/xs-password-reset/+merge/45537 | 19:44 |
jdurgin | soren: thanks! | 19:50 |
soren | jdurgin: Thank *you*! | 19:51 |
jt_zg | Is there a way to contribute to the documentation here: http://swift.openstack.org/howto_installmultinode.html ? | 19:53 |
kpepple | jt_zg: that's probably an RST doc in the doc/ tree within the swift bzr repo. | 19:55 |
jt_zg | kpepple, thanks for the tip. I'll check that out! | 19:55 |
annegentle | jt_zg: yep, that's right. it's an RST doc. (kpepple is fast on the keyboard!) | 19:57 |
jt_zg | annegentle, Indeed he is. I felt my hair whooshing around. | 19:57 |
jt_zg | annegentle, also want to say hi. Just joined your doc group :) | 19:58 |
kpepple | annegentle: what are you using to edit the rst docs ? is there a good editor .. beyond just using vim ? | 19:58 |
annegentle | jt_zg: oh, hi! Thanks for making the connection. I just use a text editor. With a spell check :) | 19:59 |
jt_zg | annegentle, is there any set of docs that require some work? | 20:02 |
devcamcar | hey all, after lunch i'll be dropping the nebula dashboard code into launchpad | 20:03 |
devcamcar | today is the day | 20:03 |
soren | I would really appreciate a review of https://code.launchpad.net/~soren/nova/lp701864/+merge/45976 It's quite straight forward and it's blocking package builds on Natty. | 20:03 |
creiht | woot | 20:03 |
soren | devcamcar: Project name? | 20:03 |
kpepple | devcamcar: is this the django dashboard ? | 20:03 |
devcamcar | it will be 2 launchpad repos | 20:04 |
devcamcar | one called django-nova which is the django module | 20:04 |
devcamcar | and openstack-dashboard-django or something like that, need to talk to ttx about naming, but this one is a simple reference django site that uses the module | 20:04 |
soren | devcamcar: Cool. Let me know when it's up. I'm really, really looking forward to seeing it. | 20:08 |
devcamcar | soren: here is a sneak peek: https://launchpad.net/django-nova | 20:08 |
devcamcar | it needs documentation | 20:08 |
annegentle | jt_zg: here's the set of goals for Bexar (the current release) http://www.openstack.org/blog/2010/11/doc-plans-for-upcoming-openstack-releases/ Some of it is done, but tutorials are not yet complete. | 20:08 |
devcamcar | and i haven't pushed the reference site to make use of it yet, but there is code to browse | 20:09 |
soren | devcamcar: It's really the reference site I'm dying to see :) | 20:09 |
devcamcar | yep, after lunch, which i am now off to :) | 20:10 |
soren | devcamcar: Enjoy. | 20:10 |
devcamcar | i'll let you know when its up | 20:10 |
annegentle | devcamcar: I wonder if just one new document on the nova site would cover it? Feel free to ping me for doc ideas. | 20:10 |
devcamcar | annegentle: will do :) | 20:10 |
xtoddx | quick 2-line bugfix review needed: https://code.launchpad.net/~anso/nova/managelog/+merge/46033 | 20:16 |
jt_zg | annegentle, I'll do some reading and get back to you if you'd like | 20:22 |
annegentle | jt_zg: please do, feel free to email anne@openstack.org or just talk to me here on IRC | 20:23 |
jt_zg | annegentle, will do. #5 on that list caught my eye | 20:24 |
annegentle | jt_zg: excellent. there's a huge need for tutorials :) | 20:28 |
jt_zg | annegentle, my area of expertise! I'll spin up some VPS' this weekend and try and bang something out for 5.1 and 5.2 | 20:29 |
dabo | xtoddx: reviewed and approved | 20:31 |
xtoddx | dabo: thx | 20:31 |
uvirtbot | New bug: #702106 in swift "Functional Tests in test/functional Should Look For Config File in /etc" [Undecided,New] https://launchpad.net/bugs/702106 | 20:36 |
uvirtbot | New bug: #702107 in swift "swauth bins should default to the right port if no port is given" [Low,Triaged] https://launchpad.net/bugs/702107 | 20:41 |
soren | Save the Ubuntu Natty builds! Approve https://code.launchpad.net/~soren/nova/lp701864/+merge/45976 ! | 21:10 |
_0x44 | jaypipes: I made those updates you requested. | 21:23 |
vishy | some of you may find this useful: http://unchainyourbrain.com/openstack/12-testing-nova-openstack-compute-with-vagrant-and-chef | 21:28 |
devcamcar | xtoddx: approved managelog branch | 21:32 |
xtoddx | devcamcar: thanks | 21:33 |
openstackhudson | Project nova build #394: SUCCESS in 1 min 22 sec: http://hudson.openstack.org/job/nova/394/ | 21:33 |
openstackhudson | Tarmac: _wait_with_callback was changed out from under suspend/resume. fixed. | 21:33 |
henrichrubin | jaypipes: i was able to fix the rabbit-mq error by manually removing and reinstalling it. note that i had to use dpkg --purge for it to work fully. | 21:34 |
henrichrubin | but now i have this error when i try to retrieve novarc from nova-manage "AttributeError: 'unicode' object has no attribute 'access'." any idea? | 21:35 |
nati | I think there is a bug in get_environment_rc in auth/manager.py | 21:37 |
nati | Method signature is wrong: self.__generate_rc(user.access, user.secret, pid, use_dmz) | 21:38 |
nati | This should be self.__generate_rc(user, pid, use_dmz) | 21:38 |
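The traceback henrichrubin pasted is the classic symptom of passing a bare username string where a User object is expected; a tiny self-contained illustration of the failure mode (the names below only mirror the error message, they are not nova's actual code):

```python
class User(object):
    def __init__(self, access, secret):
        self.access = access
        self.secret = secret

def generate_rc(user):
    # Works when handed a User object...
    return 'export EC2_ACCESS_KEY=%s' % user.access

print generate_rc(User('access-key', 'secret-key'))
# ...but a bare unicode username reproduces the reported error:
print generate_rc(u'someuser')  # AttributeError: 'unicode' object has no attribute 'access'
```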
annegentle | vishy: nice... added a link to it on http://wiki.openstack.org/NovaVirtually | 21:40 |
openstackhudson | Project nova build #395: SUCCESS in 1 min 23 sec: http://hudson.openstack.org/job/nova/395/ | 21:43 |
openstackhudson | Tarmac: Initialize logging in nova-manage so we don't see errors about missing handlers. | 21:43 |
soren | vishy: https://code.launchpad.net/~soren/nova/lp701864/+merge/45976 Can you have another look see? | 21:47 |
soren | vishy: How slow is your network really? (looking at bug 702040) | 21:52 |
uvirtbot | Launchpad bug 702040 in nova "Virtio net drivers do not work" [Undecided,New] https://launchpad.net/bugs/702040 | 21:52 |
vishy | one sec, looking up my tests | 21:53 |
henrichrubin | nati: thx. you are right. do you want me to file a bug report or do you? | 21:54 |
nati | henrichrubin: If you don't mind, would you please send a report? I'm working on IPv6 now :) | 21:55 |
vishy | soren: as i recall it was approx 100 MB/s vs 300 MB/s | 21:55 |
vishy | both are slow, but non-virtio is just terrible | 21:55 |
henrichrubin | nati: ok, i'll do it. | 21:57 |
soren | vishy: 100 MB/s is slow?!? What sort of alien gear do you guys have at NASA? | 21:57 |
vishy | 10GE | 21:58 |
soren | 100MB/s is exactly what you can expect when you're emulating a gigabit nic. | 21:58 |
vishy | k apparently i misremembered or perhaps the network is a bit more loaded | 21:59 |
vishy | virtio: 571.4375 MB / 10.00 sec = 479.2965 Mbps 97 %TX 35 %RX 0 retrans 1.52 msRTT | 21:59 |
soren | Yeah, that's not impressive. | 21:59 |
vishy | which is what, 60 MB/s? | 21:59 |
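The arithmetic behind that estimate: nuttcp reports throughput in megabits per second, so dividing by 8 converts to megabytes per second.

```python
# nuttcp prints Mbps; divide by 8 to compare with MB/s figures.
mbps = 479.2965                    # the virtio result quoted above
print("%.1f MB/s" % (mbps / 8.0))  # -> 59.9 MB/s, i.e. roughly 60
```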
soren | Which kernel is this on? (host, I mean) | 21:59 |
vishy | I seem to remember it being better than that | 21:59 |
vishy | host kernel is old | 21:59 |
vishy | lucid kernel and probably outdated | 22:00 |
soren | I forget when macvtap landed. | 22:00 |
soren | Post-lucid for sure. | 22:00 |
*** odyi has joined #openstack | 22:01 | |
soren | But yeah, without virtio you're not going to get above a gigabit. | 22:02 |
vishy | soren: non-virtio to virtio 237.1875 MB / 10.00 sec = 198.9326 Mbps 99 %TX 35 %RX 0 retrans 1.16 msRTT | 22:03 |
vishy | soren: non to non 205.3750 MB / 9.99 sec = 172.3916 Mbps 99 %TX 70 %RX 0 retrans 0.63 msRTT | 22:05 |
vishy | soren: virtio to non-virtio 369.7717 MB / 10.00 sec = 310.0667 Mbps 7 %TX 87 %RX 0 retrans 0.97 msRTT | 22:06 |
vishy | seems odd that we are well under a gig with virtio | 22:07 |
vishy | considering it is 10G underneath. | 22:07 |
vishy | perhaps we just need to upgrade the hosts | 22:07 |
*** drico has joined #openstack | 22:10 | |
*** ChumbyJay has joined #openstack | 22:10 | |
*** hggdh_ has quit IRC | 22:14 | |
*** ppetraki has quit IRC | 22:15 | |
soren | vishy: It would be an interesting experiment at least. | 22:16 |
*** mdomsch has joined #openstack | 22:20 | |
openstackhudson | Project nova build #396: SUCCESS in 1 min 24 sec: http://hudson.openstack.org/job/nova/396/ | 22:23 |
openstackhudson | Tarmac: Fix test failures on Python 2.7 by eagerly loading the fixed_ip attribute on instances. No clue why it doesn't affect python 2.6, though. | 22:23 |
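For context on what "eagerly loading" means in that commit, here is a self-contained SQLAlchemy sketch with toy models (not nova's actual schema): joinedload fetches the related row in the same query, so the attribute is already populated when it is touched later.

```python
# Illustration of the eager-loading idea (toy models, not nova's schema):
# joinedload pulls fixed_ip in the same query, so accessing it after the
# session is closed cannot trigger a failing lazy load.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import joinedload, relationship, sessionmaker

Base = declarative_base()

class FixedIp(Base):
    __tablename__ = 'fixed_ips'
    id = Column(Integer, primary_key=True)
    address = Column(String)
    instance_id = Column(Integer, ForeignKey('instances.id'))

class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    fixed_ip = relationship(FixedIp, uselist=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Instance(id=1, fixed_ip=FixedIp(address='10.0.0.2')))
session.commit()

inst = session.query(Instance).options(joinedload(Instance.fixed_ip)).first()
session.close()
print(inst.fixed_ip.address)  # already loaded; no lazy load after close
```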
soren | \o/ | 22:23 |
devcamcar | kerplunk | 22:24 |
devcamcar | https://code.launchpad.net/openstack-dashboard | 22:24 |
devcamcar | https://launchpad.net/django-nova | 22:24 |
devcamcar | https://launchpad.net/openstack-dashboard is a better link | 22:25 |
devcamcar | so openstack-dashboard is a reference django implementation of django-nova | 22:25 |
devcamcar | django-nova is a django module meant to be reused by any number of different django sites | 22:25 |
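What that split implies for a consuming site, sketched as Django config. The app label 'django_nova' and the URL prefix below are assumptions for illustration, not documented values.

```python
# settings.py of a hypothetical Django site reusing django-nova.
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django_nova',  # the reusable module devcamcar describes (assumed label)
)

# urls.py: mount the module's views next to the site's own pages.
from django.conf.urls.defaults import patterns, include

urlpatterns = patterns('',
    (r'^nova/', include('django_nova.urls')),  # assumed urlconf path
)
```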
*** rnirmal has quit IRC | 22:26 | |
vishy | sexy! | 22:27 |
*** sophiap has quit IRC | 22:28 | |
dabo | soren: can you re-review https://code.launchpad.net/~ed-leafe/nova/xs-password-reset/+merge/45537 for the changes I made? | 22:28 |
soren | devcamcar: Screenshots! | 22:28 |
vishy | devcamcar: yeah test deploy up somewhere! | 22:29 |
*** fcarsten has joined #openstack | 22:29 | |
devcamcar | yes, lots to be done | 22:29 |
* vishy wants to see jake's ui changes | 22:29 | |
devcamcar | i for one am happy that it now has a README | 22:29 |
devcamcar | not all of jake's ui stuff is in, it's still a bit ugly in its current state | 22:29 |
devcamcar | but not too bad | 22:30 |
devcamcar | he's going to be fixing it up tomorrow | 22:30 |
devcamcar | hah, just received an email from ewan asking when dashboard will be released | 22:31 |
soren | "Someone" needs to be more on IRC :) | 22:31 |
vishy | devcamcar: now that dashboard is up we just need to make it not suck :) | 22:32 |
* vishy is referring specifically to the auth through the ec2 api | 22:32 | |
devcamcar | vishy: indeed! | 22:32 |
devcamcar | i think it's a good candidate to use easyapi | 22:33 |
*** burris has quit IRC | 22:33 | |
devcamcar | so hopefully we can rip out all the ec2 specific stuff soon | 22:33 |
* vishy agrees | 22:33 | |
*** gustavomzw has quit IRC | 22:35 | |
fcarsten | swift (beginner) question: Is there a way to tell whether a swift installation has reached consistency (i.e. is currently consistent)? | 22:36 |
*** brd_from_italy has quit IRC | 22:36 | |
creiht | fcarsten: a good indicator is the dispersion report | 22:36 |
creiht | fcarsten: well that depends on what you mean by consistent | 22:37 |
fcarsten | creiht: where do I find that report? | 22:37 |
creiht | fcarsten: http://swift.openstack.org/admin_guide.html#cluster-health | 22:37 |
*** burris has joined #openstack | 22:38 | |
creiht | That will let you know, in general, if replicas of object or container partitions cannot be reached | 22:38 |
fcarsten | creiht: What I mean: Swift replication claims to become eventually consistent. I want to check whether my swift installation works (well or at all) by checking whether (and how fast) it reaches consistency. | 22:38 |
creiht | and is useful if you are adding new storage to the system | 22:38 |
fcarsten | creiht: thanks, I'll have a look :-) | 22:38 |
creiht | the above will only tell you about the consistency of the partitions in the event of a ring change (or outage) | 22:39 |
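Conceptually, the dispersion report works like this: for a sample of known objects, ask the ring where every replica should live, then count how many can actually be found. A toy sketch with stubbed-out ring and reachability checks, not the actual swift-stats code:

```python
# Toy sketch of what a dispersion report measures -- not the actual
# swift-stats code. Stubs stand in for the ring and the HEAD requests
# a real report would issue against the storage nodes.

def ring_nodes(obj):
    # Hypothetical ring: maps each object to 3 replica locations.
    return ["node-%d" % ((hash(obj) + i) % 5) for i in range(3)]

def replica_reachable(node, obj):
    # Hypothetical check: a real report would HEAD the object there.
    return node != "node-0"  # pretend one zone is down

objects = ["obj-%d" % i for i in range(100)]
expected = found = 0
for obj in objects:
    for node in ring_nodes(obj):
        expected += 1
        found += replica_reachable(node, obj)
print("dispersion: %.1f%% of expected replicas reachable"
      % (100.0 * found / expected))
```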
devcamcar | creiht: howdy! refresh my memory - is there any disadvantage to starting with a small number of zones, say 4? | 22:39 |
creiht | devcamcar: hey! The main disadvantage is availability in the case of failure scenarios | 22:40 |
devcamcar | creiht: is 5 the magic number? | 22:40 |
creiht | fcarsten: the other thing you can look at is the logs, to see how long replication is taking and how much was replicated | 22:40 |
creiht | yes | 22:40 |
devcamcar | creiht: awesome, thanks | 22:41 |
creiht | we saw a pretty dramatic difference in testing between 4 and 5 zones | 22:41 |
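A back-of-the-envelope way to see that difference, assuming 3 replicas that must each land in a distinct zone: count how many distinct-zone placements remain after a zone failure. This is an illustration, not swift's actual placement algorithm.

```python
# Illustration (not swift's placement code): with 3 replicas in 3
# distinct zones, how many placements survive one failed zone?
from itertools import combinations

def surviving_placements(zones, replicas=3, failed=1):
    return len(list(combinations(range(zones - failed), replicas)))

for zones in (4, 5):
    print("%d zones: %d placements left after one zone failure"
          % (zones, surviving_placements(zones)))
# 4 zones -> 1: every partition is forced into the same 3 zones.
# 5 zones -> 4: replication still has room to rebalance.
```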
*** aimon_ has joined #openstack | 22:41 | |
fcarsten | creiht: Thanks again. | 22:42 |
creiht | np | 22:43 |
*** hky has quit IRC | 22:43 | |
*** hky has joined #openstack | 22:44 | |
*** daleolds has quit IRC | 22:44 | |
*** daleolds1 has joined #openstack | 22:44 | |
*** aimon has quit IRC | 22:45 | |
*** aimon_ is now known as aimon | 22:45 | |
*** mray has joined #openstack | 22:46 | |
devcamcar | 3pm pst is a crappy time for launchpad to go down for maintenance. | 22:47 |
devcamcar | that is all. | 22:47 |
eday | devcamcar: it's a great time where it's hosted :) | 22:48 |
xtoddx | sandywalsh: can you review https://code.launchpad.net/~anso/nova/wsgirouter | 22:50 |
sandywalsh | xtoddx, sure thing | 22:50 |
xtoddx | sandywalsh: Mostly I want confirmation that I didn't mess up the templating where nova-api translates the paste config into flags, and that the flag renames are okay | 22:51 |
*** hggdh has joined #openstack | 22:52 | |
sandywalsh | xtoddx, ok, I'll keep that in mind. I have to run for a bit, but will review tonight if that works? | 22:52 |
soren | dabo: I think I've exhausted my reviewing energy for the day. First thing in the morning! | 22:52 |
dabo | soren: ok, thanks! | 22:53 |
*** mdomsch has quit IRC | 22:54 | |
openstackhudson | Project nova-tarmac build #51,894: FAILURE in 15 sec: http://hudson.openstack.org/job/nova-tarmac/51894/ | 23:02 |
sandywalsh | xtoddx, hehe, trying to get the final diff before lp goes offline | 23:03 |
xtoddx | sandywalsh: good luck! | 23:03 |
*** piken_ has joined #openstack | 23:05 | |
*** piken has quit IRC | 23:05 | |
*** troytoman has quit IRC | 23:05 | |
* soren heads bedwards | 23:06 | |
openstackhudson | Project nova-tarmac build #51,895: STILL FAILING in 37 sec: http://hudson.openstack.org/job/nova-tarmac/51895/ | 23:07 |
*** ksteward has joined #openstack | 23:07 | |
*** DubLo7 has joined #openstack | 23:07 | |
openstackhudson | Project nova-tarmac build #51,896: STILL FAILING in 36 sec: http://hudson.openstack.org/job/nova-tarmac/51896/ | 23:12 |
openstackhudson | Project nova-tarmac build #51,897: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/51897/ | 23:17 |
mjmac | nova question, sorry if this is a FAQ, but I'm not finding the answer on nova.openstack.org... how do volumes work? are they block devices shared via AoE/iSCSI? | 23:17 |
mjmac | ah... just found the service arch doc | 23:19 |
openstackhudson | Project nova-tarmac build #51,898: STILL FAILING in 36 sec: http://hudson.openstack.org/job/nova-tarmac/51898/ | 23:22 |
*** phymata has joined #openstack | 23:26 | |
openstackhudson | Project nova-tarmac build #51,899: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/51899/ | 23:27 |
openstackhudson | Project nova-tarmac build #51,900: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/51900/ | 23:32 |
*** gondoi has quit IRC | 23:33 | |
*** pvo is now known as pvo_away | 23:33 | |
fcarsten | creiht: Hmmm swift-stats-populate -d just sits there and doesn't do anything. So far 0 containers or objects created. Do I need to install something first (apart from swift) before it works? | 23:35 |
*** abecc has quit IRC | 23:36 | |
openstackhudson | Project nova-tarmac build #51,901: STILL FAILING in 36 sec: http://hudson.openstack.org/job/nova-tarmac/51901/ | 23:37 |
fcarsten | creiht: never mind. Found the problem: bad auth server URL | 23:40 |
*** deshantm has quit IRC | 23:42 | |
openstackhudson | Project nova-tarmac build #51,902: STILL FAILING in 36 sec: http://hudson.openstack.org/job/nova-tarmac/51902/ | 23:42 |
*** nate_h has joined #openstack | 23:42 | |
*** deshantm has joined #openstack | 23:45 | |
openstackhudson | Project nova-tarmac build #51,903: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/51903/ | 23:47 |
*** hggdh has quit IRC | 23:50 | |
openstackhudson | Project nova-tarmac build #51,904: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/51904/ | 23:52 |
*** pvo_away is now known as pvo | 23:57 | |
openstackhudson | Project nova-tarmac build #51,905: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/51905/ | 23:57 |