vishy | maybe try to change that to logging.exception and see if you can get a traceback for the underlying exception causing the auth failure | 00:00 |
---|---|---|
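A minimal sketch of what vishy is suggesting above (`validate_token` is a hypothetical stand-in for whatever auth call is failing): unlike `logger.error`, `logger.exception` appends the active traceback to the log record, exposing the underlying cause.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def validate_token(token):
    raise ValueError("bad signature")  # stand-in for the real auth check

def authorize(token):
    try:
        return validate_token(token)
    except Exception:
        # logging.exception must be called from an except block; it logs
        # at ERROR level plus the current traceback
        logger.exception("auth failure for token %r", token)
        raise
```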
nelson__ | creiht: well, it's a help when you say that /myfiles shouldn't be going to the account-server ... that gives me some place to look. | 00:00 |
creiht | yeah, I'm trying to figure out how that got there | 00:00 |
creiht | nelson__: I've lost history, can you paste the st command again that you are running to upload the object? | 00:01 |
*** kevnfx has quit IRC | 00:01 | |
termie | jk0: updated | 00:01 |
nelson__ | oh urgh. Am I being bit by swift-init's silence again?? When I run this: swift-account-server account-server.conf | 00:01 |
nelson__ | I get this: Error trying to load config account-server.conf: Cannot resolve relative uri 'config:account-server.conf'; no context keyword argument given | 00:02 |
dubs | creiht: you guys should rename `st` to `swiftly` or something. st is so boring (and probably a bash alias for a lot of people :) ) | 00:02 |
nelson__ | I'm *pretty* sure that I ought to be able to run any swift server from the command line in the form swift-$SERVICE-server /etc/swift/$SERVICE-server.conf right? | 00:02 |
dubs | swiftly download ... | 00:03 |
creiht | correct | 00:03 |
creiht | dubs: lol | 00:03 |
creiht | I like it :) | 00:03 |
creiht | gholt: -^ | 00:03 |
nelson__ | I like it too. | 00:03 |
nelson__ | and 'st' is already a command, as 'man st' (scsi tape) will tell you. | 00:03 |
creiht | who uses scsi tape any more? :) | 00:03 |
nelson__ | who uses scsi any more? | 00:04 |
creiht | nelson__: that isn't a command | 00:04 |
nelson__ | ST(4) Linux Programmer's Manual ST(4) | 00:04 |
creiht | " Linux Programmer's Manual" not "User Commands" | 00:05 |
nelson__ | haha, yes, of course, man 4 isn't for commands, you're right. | 00:05 |
creiht | hehe | 00:05 |
dubs | still though :) | 00:05 |
nelson__ | still, I like the punnnage of swiftly | 00:05 |
creiht | I still like swiftly regardless :) | 00:05 |
nelson__ | check it into bzr and tell annegentle to start renaming in the docs. | 00:06 |
creiht | nelson__: make sure you add /etc/swift in front of account-server.conf | 00:06 |
creiht | the error you listed above was due to it not finding the account-server.conf | 00:07 |
nelson__ | I'm in /etc/swift. It must chdir first or something. | 00:07 |
creiht | hrm | 00:07 |
nelson__ | cuz it's running now. | 00:07 |
creiht | k | 00:07 |
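For the record, the `Cannot resolve relative uri 'config:...'` error earlier is PasteDeploy refusing a relative config path when it has no context to resolve it against, which is why creiht's advice works: always pass an absolute path.

```sh
swift-account-server /etc/swift/account-server.conf
```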
*** ctennis has joined #openstack | 00:07 | |
creiht | gotta run... will look at it more tomorrow | 00:08 |
nelson__ | st -A https://a.ra-tes.org:11000/v1.0 -U system:root -K testpass upload myfiles bigfile1.tgz | 00:08 |
nelson__ | okay, thanks for your help. | 00:08 |
dubsquared | vishy: yeah odd…objectstore is hosed | 00:08 |
dubsquared | vishy: i cant even terminate these broken instances etc etc | 00:08 |
dubsquared | going to roll home…but ill leave a message with soren too | 00:09 |
dubsquared | ill chat with you tonight if you're around | 00:09 |
dabo | ok, all tests are passing, pep8 is happy; please review https://code.launchpad.net/~ed-leafe/nova/internal_id_fix/+merge/45460 so we can push our last few merges | 00:10 |
dabo | sandywalsh: OK, I tried to get this pushed into trunk, but it looks like it ain't gonna happen, so feel free to take what you can for your own branch | 00:13 |
*** dubsquared has quit IRC | 00:14 | |
sandywalsh | k, thanks dabo. I'll give password reset a review while I'm at it | 00:14 |
termie | jk0: why are you using 'date'? | 00:15 |
dabo | i've been here too long, and my patience is gone. I'll try to get the password reset pushed later tonight, but now it looks doubtful | 00:15 |
termie | jk0: it is not a date, at the very least it is a datetime, and the real type is 'created_at' | 00:15 |
jk0 | termie: fair enough | 00:15 |
eday | dabo: it looks good minus the one extra ctxt line in there | 00:16 |
tr3buchet | eday you around? | 00:22 |
eday | tr3buchet: just reviewed | 00:22 |
tr3buchet | sweet | 00:22 |
* tr3buchet refreshes page | 00:22 | |
jk0 | I wonder if launchpad could possibly take any longer to update a diff | 00:22 |
jk0 | doesn't take quite long enough | 00:23 |
eday | jk0: I'm sure there are ways :) | 00:23 |
tr3buchet | yeah maybe a sleep 500 would help... | 00:23 |
jk0 | termie: how's it look now? | 00:23 |
*** littleidea has quit IRC | 00:23 | |
tr3buchet | it's even slower across vpn | 00:23 |
*** littleidea has joined #openstack | 00:25 | |
uvirtbot | New bug: #699654 in nova "i18n - Terminating instance with invalid instance id causes error in the response" [Undecided,New] https://launchpad.net/bugs/699654 | 00:26 |
termie | jk0: looks great :) approvedz0r | 00:30 |
jk0 | thanks :) | 00:31 |
jk0 | eday: I feel like making a few more changes, would you mind reviewing it too? :P | 00:31 |
eday | jk0: how much is it worth to you? :) | 00:36 |
jk0 | a beer on me at the next summit | 00:37 |
eday | jk0: you still working on it, or should I look now? | 00:42 |
jk0 | it's ready to go | 00:42 |
jk0 | termie gave the approval | 00:42 |
eday | so, one issue here... why filter at the DB level? Why not return the full list of actions directly from the query (like other methods) and filter/format in the openstack API code (like other data types)? | 00:45 |
eday | ie, what if another future API call wanted more data that was in the action record? | 00:46 |
jk0 | hm, I'm not sure I follow you | 00:46 |
eday | jk0: for example, remove the for loop around actions, and just return the list directly | 00:47 |
jk0 | oh, are you on the latest diff? | 00:47 |
eday | let the servers.py code filter out created/action/error | 00:47 |
eday | yup | 00:47 |
jk0 | that's what it's doing -- sending all of the instance_action records for that instance_id | 00:48 |
openstackhudson | Project nova build #364: SUCCESS in 1 min 18 sec: http://hudson.openstack.org/job/nova/364/ | 00:49 |
eday | I see the diff for r521, which filters out everything but created_at, action, and error | 00:49 |
jk0 | oh, I see what you mean | 00:50 |
jk0 | that's really the only data that's being stored | 00:50 |
jk0 | but I see where you're going | 00:50 |
eday | but if it changes.. and other things don't filter at that layer, they filter/format in the API layers (outside the core API) | 00:51 |
jk0 | I'll update that quick | 00:51 |
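A sketch of the split eday is advocating (function shapes are illustrative, not nova's actual signatures): the DB call hands back whole records, and the API layer does the filtering, so a future API call can still reach fields the current one doesn't expose.

```python
# db layer: return the full action records untouched (plain dicts here
# for illustration; the real code returns model objects)
def instance_get_actions(db, instance_id):
    return [a for a in db["instance_actions"] if a["instance_id"] == instance_id]

# API layer (servers.py): pick out only the fields the response exposes
def format_actions(actions):
    return [{"created_at": a["created_at"], "action": a["action"], "error": a["error"]}
            for a in actions]
```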
*** jc_smith has quit IRC | 00:51 | |
rlucio | is HOL still broken? | 00:54 |
rlucio | for nova | 00:54 |
rlucio | ? | 00:54 |
*** jc_smith has joined #openstack | 00:59 | |
*** johnpur has quit IRC | 00:59 | |
jk0 | eday: how about now? | 01:03 |
*** dirakx has joined #openstack | 01:04 | |
*** MarkAtwood has quit IRC | 01:06 | |
*** maplebed has quit IRC | 01:07 | |
*** rlucio has quit IRC | 01:07 | |
openstackhudson | Project nova build #365: SUCCESS in 1 min 18 sec: http://hudson.openstack.org/job/nova/365/ | 01:19 |
*** daleolds has quit IRC | 01:26 | |
*** adiantum has joined #openstack | 01:33 | |
*** jc_smith has quit IRC | 01:35 | |
*** joearnold has quit IRC | 01:36 | |
*** dragondm has quit IRC | 01:41 | |
*** kevnfx has joined #openstack | 01:43 | |
*** dfg_ has quit IRC | 01:47 | |
*** littleidea has quit IRC | 01:48 | |
*** dfg_ has joined #openstack | 01:52 | |
*** dfg_ has quit IRC | 01:57 | |
*** zul has joined #openstack | 02:33 | |
*** Jordandev has quit IRC | 02:47 | |
*** jdurgin has quit IRC | 02:47 | |
*** gasbakid|2 has quit IRC | 02:47 | |
*** trin_cz has quit IRC | 02:53 | |
*** ArdRigh has joined #openstack | 02:55 | |
*** littleidea has joined #openstack | 03:04 | |
*** styro has left #openstack | 03:05 | |
*** lvaughn_ has quit IRC | 03:23 | |
*** littleidea has quit IRC | 03:33 | |
winston-d | one question about nova | 03:45 |
creiht | winston-d: howdy :) | 03:46 |
creiht | any luck with the performance issues? | 03:46 |
winston-d | what would happen to the images when an instance is shut down? | 03:46 |
winston-d | creiht: I thought you left | 03:46 |
* creiht never leaves | 03:47 | |
creiht | :) | 03:47 |
creiht | well I did leave, but was just checking back in | 03:47 |
creiht | :) | 03:47 |
winston-d | creiht: I see. | 03:48 |
winston-d | creiht: how long will you be around? i need to get a quick lunch. | 03:51 |
creiht | not much | 03:52 |
creiht | sorry | 03:52 |
*** kevnfx has quit IRC | 03:52 | |
winston-d | OK, never mind. Let me just skip this lunch. :) here are the newest results: http://paste.openstack.org/show/419/ | 03:52 |
creiht | hah | 03:53 |
creiht | hrm | 03:53 |
creiht | can you paste another section of proxy logs from a bench run? | 03:54 |
winston-d | sure. all of them or just one operation PUT/GET/DEL? | 03:54 |
creiht | just a good sampling | 03:54 |
creiht | of puts | 03:54 |
winston-d | OK | 03:54 |
creiht | I imagine that if we solve the put problem, everything else will follow :) | 03:55 |
winston-d | creiht: no error this time. http://paste.openstack.org/show/435/ | 03:57 |
winston-d | the average latency for PUT seems to be ~0.45 s. So 20.2/s PUT looks reasonable? | 03:58 |
creiht | winston-d: hrm... can you post a larger cross-section? | 03:58 |
creiht | all of those were .05ish seconds | 03:58 |
winston-d | OK | 03:58 |
creiht | well if it was just a single thread, then that would make sense | 04:00 |
creiht | It should by default run with a concurrency of 10 | 04:00 |
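A quick sanity check on that: at the ~0.05 s per-PUT latencies in the paste, a single thread would complete about 1 / 0.05 = 20 PUTs/s, matching the observed 20.2/s almost exactly, whereas the default concurrency of 10 should push throughput toward ~200/s.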
winston-d | this is the full one, 1000 PUTs: http://paste.openstack.org/show/436/ | 04:01 |
creiht | cool | 04:01 |
creiht | looking... | 04:01 |
creiht | winston-d: what rev are you running again? | 04:02 |
winston-d | swift-1.1.0 | 04:02 |
creiht | and are you running swift-bench with the defaults, or are you overriding any of them? | 04:02 |
winston-d | with defaults | 04:03 |
winston-d | creiht: i've some questions about the consistency of swift. how can i know swift (data/account) is consistent? Any way to find out? | 04:05 |
creiht | just sec, I might be on to something :) | 04:06 |
winston-d | K | 04:06 |
creiht | well I thought I was... there was once a bug where swift-bench wouldn't write to multiple containers, but it is fixed in that version | 04:11 |
creiht | and it would cause poor perf numbers like you are seeing | 04:11 |
creiht | so consistency | 04:11 |
creiht | which data do you want to know is consistent? | 04:12 |
creiht | the objects, container listings, or...? | 04:12 |
winston-d | first, account | 04:13 |
winston-d | what can i expect from 'swift-account-audit' ? | 04:13 |
creiht | winston-d: out of curiosity, can you run swift-bench the same but add a -c 20 as a command line option, and let me know if things change any? | 04:13 |
winston-d | creiht: let me try | 04:14 |
creiht | that will bump the concurrency up to 20 | 04:14 |
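What creiht is asking for, roughly (the conf-file argument is an assumption; without one swift-bench runs on its built-in defaults):

```sh
# -c overrides the default concurrency of 10
swift-bench -c 20 /etc/swift/swift-bench.conf
```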
creiht | swift-account-audit is something that can be used to verify the consistency of the account, its containers, and objects | 04:15 |
creiht | but it is more of a one off tool | 04:15 |
creiht | or more for debugging | 04:15 |
winston-d | new results w/ -c 20: http://paste.openstack.org/show/437/ | 04:15 |
winston-d | http://paste.openstack.org/compare/437/419/ not much difference | 04:16 |
creiht | heh | 04:16 |
creiht | ok | 04:16 |
creiht | do you want to know consistency in general, across the cluster? or for a specific account? | 04:17 |
*** ccustine has quit IRC | 04:18 | |
winston-d | creiht: you know, yesterday i destroyed two storage drives, and i used 'swift-auth-recreate-accounts' to recreate accounts. but i never knew whether it fixed the destroyed drives | 04:18 |
creiht | swift-auth-recreate-accounts will let you know if it can't recreate the account | 04:19 |
creiht | you could use swift-account-audit also in that case | 04:19 |
*** kashyapc has joined #openstack | 04:20 | |
winston-d | "1 accounts, failures []" is the result of swift-recreate-accounts | 04:20 |
creiht | then it was successful | 04:20 |
creiht | if any accounts show up in [], then those failed | 04:20 |
winston-d | i see | 04:21 |
creiht | in swift, everything has 3 replicas | 04:21 |
creiht | for any PUT operation (account, container, or object), the operation only returns success if at least 2 of the replicas were written | 04:21 |
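A toy illustration of the write-quorum rule creiht just described (not swift's actual code, just the arithmetic): with 3 replicas, 2 successful writes are enough.

```python
def put_succeeded(write_results, replica_count=3):
    # a PUT returns success once a majority of the replica writes
    # (2 of 3 by default) have completed
    quorum = replica_count // 2 + 1
    return sum(1 for ok in write_results if ok) >= quorum

assert put_succeeded([True, True, False])       # 2 of 3 written: success
assert not put_succeeded([True, False, False])  # only 1 of 3: failure
```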
winston-d | well, i've turned off replicator of account/container/object | 04:22 |
creiht | ahh | 04:22 |
creiht | that's right, we did that for testing | 04:23 |
winston-d | this is the result of swift-account-audit http://paste.openstack.org/show/438/ | 04:23 |
creiht | I've actually never used it myself | 04:24 |
winston-d | well. :) | 04:25 |
*** ArdRigh has quit IRC | 04:25 | |
winston-d | never mind. | 04:25 |
creiht | hehe | 04:25 |
creiht | just sec | 04:25 |
creiht | trying it now :) | 04:25 |
creiht | of course it runs fine here :/ | 04:27 |
creiht | I get the same error that you do if I try an account that doesn't exist | 04:27 |
creiht | did you enter the right account string? | 04:28 |
creiht | try | 04:28 |
anticw | looks like it works here too | 04:28 |
creiht | swift-account-audit AUTH_40d3ae6f824a980cd8c01a5f93f5e447 | 04:28 |
anticw | though apparently some people have REALLY large containers | 04:28 |
creiht | hehe | 04:28 |
creiht | it can be a bit intensive for a large account | 04:29 |
winston-d | :) | 04:29 |
winston-d | http://paste.openstack.org/show/439/ new one with right account. | 04:31 |
winston-d | it says Container mismatch: 1, and missing replicas: 87 | 04:32 |
winston-d | that's fine, right? since I turned off the replicator services | 04:32 |
creiht | winston-d: yup | 04:34 |
winston-d | back to perf issue, so right now we can conclude there's no perf issue with swift. but there might be some issue with swift-bench. am i right? | 04:36 |
creiht | or possibly settings | 04:37 |
creiht | winston-d: can you paste your /etc/swift/proxy-server.conf and /etc/swift/object-server.conf from one of the storage nodes? | 04:37 |
winston-d | sure. right away | 04:38 |
winston-d | http://paste.openstack.org/show/440/ | 04:39 |
*** littleidea has joined #openstack | 04:41 | |
creiht | winston-d: did you restart all the services after making the config changes to add the workers? | 04:42 |
winston-d | sure did | 04:43 |
winston-d | i can do it again right now and test to confirm | 04:43 |
winston-d | shall i turn on replicator this time? | 04:43 |
creiht | I would leave replicator off still | 04:44 |
creiht | yeah try restarting | 04:44 |
creiht | just to be sure | 04:44 |
winston-d | hmm, didn't know what happened, but this time, back to super slow mode (PUT 2.6/s) | 04:48 |
creiht | hah | 04:51 |
*** mdomsch has joined #openstack | 04:54 | |
creiht | can you paste the proxy log? | 04:56 |
winston-d | wait a sec. i repeated the test 3 times, and the result is NOT consistent | 04:59 |
winston-d | two rounds are super slow, one is fine | 04:59 |
*** littleidea has quit IRC | 04:59 | |
creiht | odd | 05:09 |
creiht | how many objects are you putting? | 05:09 |
creiht | winston-d: out of curiosity, can you run | 05:10 |
creiht | swift-ring-builder /etc/swift/object.builder | 05:10 |
winston-d | sure | 05:10 |
creiht | and all the other ring builder files | 05:10 |
creiht | and paste the output | 05:11 |
winston-d | OK. here's result for slow run and PUT log first: http://paste.openstack.org/show/441/ | 05:14 |
*** ArdRigh has joined #openstack | 05:15 | |
creiht | winston-d: yeah... that is similar to what we saw before, most calls are pretty fast, and then a couple take 5-10 seconds | 05:16 |
winston-d | creiht: http://paste.openstack.org/show/442/ ring builder | 05:17 |
creiht | winston-d: btw, if you have 6 machines, I would go ahead and make the 6th another zone | 05:19 |
creiht | instead of having 1 zone have 2 machines, while all the others just have 1 | 05:19 |
creiht | but that shouldn't have anything to do with what we are seeing | 05:19 |
winston-d | oh, ok | 05:20 |
creiht | otherwise the rings look good | 05:20 |
creiht | in general it is a good idea to have the same number of devices in each zone | 05:21 |
winston-d | is there any specific reason that the swift doc recommends at least 5 zones? | 05:22 |
anticw | oh, that's documented now? neat | 05:23 |
creiht | winston-d: mainly it is best for handling failure scenarios | 05:23 |
creiht | that allows for 3 replicas plus 2 zones to use as handoff nodes in the case of failure | 05:24 |
winston-d | anticw: swift doc is way better than Nova :) | 05:24 |
anticw | winston-d: i know, they got a lot better too | 05:24 |
anticw | but swift is also somewhat more mature | 05:24 |
winston-d | creiht: what if i have 8 nodes, 10, 12? | 05:24 |
creiht | if you have only 3 zones, and a device fails, there isn't another zone that can be used to handoff to | 05:24 |
creiht | I would shoot for a multiple of the number of zones | 05:25 |
winston-d | anticw: i heard that before. | 05:25 |
creiht | so if you are going to have 5 zones, then I would start with 5, 10, 15, etc. | 05:25 |
*** damon__ has quit IRC | 05:25 | |
winston-d | creiht: OK. understand, but now i have 6... I am likely to get another 4 later... | 05:26 |
creiht | k | 05:26 |
creiht | I would just use 5 then | 05:26 |
*** MarkAtwood has joined #openstack | 05:26 | |
creiht | and actually it would probably be a better use to make the leftover another proxy | 05:26 |
creiht | since you only have one proxy | 05:26 |
winston-d | hmm. i should seriously consider the deployment | 05:28 |
creiht | winston-d: is this just for testing, or is this also going to be production hardware? | 05:29 |
winston-d | just for testing. :) | 05:29 |
creiht | for a production setup, a lot of thought needs to be put into the deployment | 05:29 |
creiht | ok | 05:29 |
winston-d | but for performance test in future | 05:29 |
creiht | then I would recommend starting with 2 proxies and 5 storage nodes | 05:30 |
winston-d | creiht: can i also use these nodes as Nova compute nodes? LOL | 05:32 |
creiht | hah | 05:32 |
creiht | actually you probably have enough memory/cpu to do that | 05:33 |
creiht | though you should get some more disks before you do :) | 05:33 |
winston-d | certainly | 05:34 |
creiht | devcamcar: Are you guys still looking at deploying swift alongside your nova installs? | 05:34 |
creiht | winston-d: do you have any sas drives around that you could put in the storage nodes to test? | 05:35 |
creiht | I would be curious to see if you still have the same perf problems running off the sas drives rather than the ssds | 05:35 |
winston-d | nope, only SATA interface is available | 05:35 |
creiht | or SATA | 05:35 |
winston-d | i don't have enough SATA disk right now. :* | 05:36 |
creiht | since they were ssds, I just assumed a SAS interface for some reason | 05:36 |
creiht | k | 05:36 |
winston-d | i can try that once i do | 05:36 |
creiht | winston-d: what machine are you running swift-bench from? | 05:37 |
winston-d | one of the storage nodes | 05:37 |
winston-d | the one in zone 1 | 05:37 |
creiht | k | 05:38 |
creiht | by default, swift-bench uses 0byte files, so we shouldn't be hitting any network limitations | 05:39 |
creiht | winston-d: I'll have to sleep on it some more | 05:39 |
creiht | feel free to leave any notes if you make any more progress | 05:39 |
winston-d | OK. i may skip this issue for a while and dive into Nova mess. :) | 05:39 |
winston-d | creiht: thank you so much for your help. | 05:40 |
winston-d | creiht: have a good nite. | 05:41 |
*** mdomsch has quit IRC | 05:45 | |
*** f4m8_ is now known as f4m8 | 05:49 | |
*** adiantum has quit IRC | 05:57 | |
*** kevnfx has joined #openstack | 06:00 | |
*** adiantum has joined #openstack | 06:03 | |
*** zaitcev has quit IRC | 06:18 | |
*** Ryan_Lane is now known as Ryan_Lane|sleep | 06:20 | |
*** ramkrsna has joined #openstack | 06:22 | |
*** opengeard has quit IRC | 06:35 | |
*** kevnfx has quit IRC | 06:39 | |
*** adiantum has quit IRC | 06:50 | |
*** adiantum has joined #openstack | 06:55 | |
*** opengeard has joined #openstack | 07:21 | |
*** MarkAtwood has quit IRC | 07:29 | |
*** 36DAAZJLY has joined #openstack | 07:32 | |
*** miclorb has quit IRC | 07:33 | |
ttx | sirp-: lp:~rconradharris/nova/xs-snap-return-image-id-before-snapshot is just the first step to xs-snapshots, right ? | 07:42 |
*** adiantum has quit IRC | 07:42 | |
zykes- | |/j indefero | 07:46 |
*** adiantum has joined #openstack | 07:54 | |
*** MarkAtwood has joined #openstack | 07:57 | |
*** allsystemsarego has joined #openstack | 07:58 | |
*** brd_from_italy has joined #openstack | 08:00 | |
*** befreax has joined #openstack | 08:02 | |
*** adiantum has quit IRC | 08:06 | |
*** adiantum has joined #openstack | 08:06 | |
*** MarkAtwood has quit IRC | 08:19 | |
*** adiantum has quit IRC | 08:34 | |
*** trin_cz has joined #openstack | 08:35 | |
*** arcane has quit IRC | 08:36 | |
*** adiantum has joined #openstack | 08:41 | |
*** skrusty has quit IRC | 08:44 | |
*** calavera has joined #openstack | 08:47 | |
*** Guest61782 has joined #openstack | 08:48 | |
*** Guest61782 has quit IRC | 08:50 | |
*** skrusty has joined #openstack | 08:59 | |
*** nijaba has quit IRC | 09:00 | |
*** adiantum has quit IRC | 09:02 | |
*** nijaba has joined #openstack | 09:03 | |
*** nijaba has joined #openstack | 09:03 | |
*** adiantum has joined #openstack | 09:09 | |
*** MarkAtwood has joined #openstack | 09:14 | |
*** arcane has joined #openstack | 09:19 | |
ttx | soren: we should use this to track the lp:nova/trunk PPA downloads : http://ftagada.wordpress.com/2011/01/05/ppa-stats-initial-impressions/ | 09:20 |
* ttx gets some coffee | 09:20 | |
*** fabiand_ has joined #openstack | 09:30 | |
*** adiantum has quit IRC | 09:31 | |
*** Abd4llA has joined #openstack | 09:35 | |
*** Abd4llA is now known as Guest32189 | 09:35 | |
*** adiantum has joined #openstack | 09:36 | |
*** Guest32189 has quit IRC | 09:42 | |
*** adiantum has quit IRC | 09:56 | |
*** adiantum has joined #openstack | 09:56 | |
*** arcane has quit IRC | 10:00 | |
*** arcane has joined #openstack | 10:04 | |
*** gasbakid has joined #openstack | 10:15 | |
*** irahgel has joined #openstack | 10:16 | |
*** adiantum has quit IRC | 10:22 | |
*** adiantum has joined #openstack | 10:27 | |
*** adiantum has quit IRC | 10:48 | |
*** adiantum has joined #openstack | 10:48 | |
*** trin_cz has quit IRC | 10:49 | |
soren | ttx: Yeah, I saw that. | 10:55 |
soren | ttx: I asked him about it yesterday. He's working on releasing the code. | 10:55 |
*** kashyapc has quit IRC | 11:11 | |
*** Abd4llA has joined #openstack | 11:38 | |
* soren lunches | 11:40 | |
*** MarkAtwood has quit IRC | 11:47 | |
* soren returns | 12:05 | |
*** MarkAtwood has joined #openstack | 12:05 | |
*** trin_cz has joined #openstack | 12:18 | |
sandywalsh | o/ | 12:23 |
sandywalsh | hey guys, trunk openstack api is busted until we get this bug fix in ... can someone approve please? https://code.launchpad.net/~ed-leafe/nova/internal_id_fix | 12:30 |
sandywalsh | does this need a merge prop? | 12:30 |
soren | sandywalsh: Yeah, can't merge stuff without a merge proposal. | 12:31 |
soren | I could have sworn I saw an mp for that this morning, though. | 12:31 |
dabo | sandywalsh: It got rejected because it contained some code cleanup stuff that wasn't specific to the internal_id question. That's why I pulled it. | 12:32 |
sandywalsh | hmm | 12:32 |
sandywalsh | can we rip out those other changes? | 12:32 |
sandywalsh | (I can branch it and do so if you'd like) | 12:32 |
dabo | sandywalsh: yeah, we can. It's just that I was trying to help you out yesterday by going on this tangent, and after 13 hours straight I got fed up with the nitpicking | 12:33 |
dabo | It caused me to miss the freeze for password reset, so I didn't see the point in continuing. | 12:34 |
sandywalsh | hmm, not sure where we stand now | 12:35 |
dabo | I took my password reset branch, which contained the new stuff along with a lot of code cleanup around those changes, and in order to help you with the internal_id problems introduced by eday's change, I removed the password-specific stuff | 12:36 |
dabo | that apparently wasn't good enough, and there are only so many hours in a day | 12:36 |
soren | dabo: If you have a branch you'd like to land, feel free to propose it. | 12:36 |
sandywalsh | but these are two different branches, correct? | 12:36 |
soren | dabo: Any core-dev can approve exceptions to the freeze. | 12:37 |
dabo | sandywalsh: yes, but you understand that the fix branch was derived from the password branch, right? It wasn't created fresh from trunk | 12:37 |
sandywalsh | password-reset and instance-id? Password reset used to have instance-id in it, but you pulled it out. And password reset was merged with trunk recently enough, no? | 12:38 |
dabo | soren: seems silly that exceptions can be made after the deadline, but not before. | 12:38 |
soren | dabo: Eh? | 12:38 |
sandywalsh | let's just get it done :) | 12:38 |
soren | dabo: Before deadline, there's no point in making exceptions, since you don't need any to begin with. | 12:39 |
dabo | sandywalsh: no, password reset was almost ready to propose. I stopped work on that to make the internal_id only fix | 12:39 |
sandywalsh | what can I do to the branch to get things on track? | 12:39 |
*** arcane has quit IRC | 12:39 | |
sandywalsh | soren, would it be easier just to push on password-reset and kill two birds? | 12:39 |
dabo | soren: sorry, I've been doing agile too long to adapt my thinking to this style of development. | 12:39 |
soren | sandywalsh: The internal-id-fix should be a much easier review. | 12:40 |
sandywalsh | I agree | 12:40 |
dabo | sandywalsh: it would have been done much earlier yesterday if I hadn't gone off on a tangent to create a "pure" patch | 12:40 |
soren | I don't really understand the problem here? There's a branch that fixes the internal id thing, and there's a branch that implement reset password. | 12:40 |
soren | Right? | 12:41 |
*** ctennis has quit IRC | 12:41 | |
dabo | soren: no. There was the branch that implements password stuff | 12:41 |
soren | Right.. | 12:41 |
dabo | then yesterday the internal_id problem came up | 12:41 |
soren | ...right.. | 12:41 |
dabo | I fixed it in my branch and was working to finish the password branch | 12:41 |
soren | Ok. | 12:42 |
dabo | when the bug was holding up sandywalsh. | 12:42 |
soren | Right. | 12:42 |
sandywalsh | (and trunk was broken) | 12:42 |
soren | Right, right. | 12:42 |
dabo | so instead of finishing password, I tried to create a branch from password minus the password-specific stuff | 12:42 |
dabo | so that sandywalsh could more easily merge into his stuff | 12:43 |
sandywalsh | did you do this by branching password-reset? | 12:43 |
soren | dabo: Right. | 12:43 |
sandywalsh | or by ripping out password-reset branch | 12:43 |
dabo | when it was proposed for merge, it was rejected because it also contained some code cleanup that wasn't specific to the internal_id bug | 12:43 |
dabo | code that was going to be in the password branch anyway | 12:44 |
soren | Well, not technically "Rejected", but eday voted "Needs fixing" on it, yes. | 12:44 |
dabo | we were trying to beat the deadline and were rushing to get stuff done in time. | 12:44 |
dabo | vishy also objected here on irc | 12:45 |
soren | Ok. | 12:45 |
soren | Ok, so those were the problems /yesterday/. I still don't really understand what the problem is today. | 12:45 |
dabo | so now that the deadline's passed, I'm going to start over and do it right | 12:45 |
soren | Why not just let sandywalsh remove the "superfluous" cleanups and propose a clean internal-id fix? | 12:45 |
dabo | soren: because that's what I'm doing already | 12:46 |
soren | Ah. | 12:46 |
soren | So there's no problem? | 12:46 |
soren | Or? | 12:46 |
soren | If there's problems I'd like to help solve them, that's all. | 12:47 |
soren | Not trying to stir up new ones :) | 12:47 |
*** zykes- has quit IRC | 12:47 | |
*** nijaba has quit IRC | 12:47 | |
*** Xenith has quit IRC | 12:47 | |
dabo | soren: when a bug is introduced and has broken trunk, and a fix is being prepared, it seems silly to argue over trivial name changes as a reason to hold up the fix | 12:48 |
dabo | e.g., compute_api.get() vs. compute_api.get_instance() | 12:48 |
soren | dabo: You're arguing with the wrong person :) | 12:48 |
dabo | well, you asked if there were problems... :) | 12:49 |
soren | True :) | 12:49 |
soren | About the password reset stuff... There's a blueprint for it, it was approved, etc. Feel free to propose it for merge. | 12:49 |
*** nijaba has joined #openstack | 12:49 | |
soren | dabo: FeatureFreeze is *next* Thursday. | 12:50 |
sandywalsh | so, is there anything I can do to help here? | 12:50 |
dabo | soren: i was planning on it. This stuff about exceptions to the freeze was news to me when I read ttx's email this morning | 12:50 |
sandywalsh | dabo, I can review password-reset and instance-id once there are merge-props | 12:50 |
dabo | sandywalsh: just be patient. I need to be in the code window instead of the irc window | 12:50 |
soren | :) | 12:50 |
sandywalsh | soren, do bug fixes require merge-props? I assume yes? | 12:51 |
dabo | sandywalsh: how else would they get merged? | 12:51 |
*** Xenith has joined #openstack | 12:51 | |
sandywalsh | well, it gets into that blurry area of merge-prop freeze | 12:51 |
sandywalsh | or is it just blueprint merge-prop freeze? | 12:52 |
*** rogue780 has joined #openstack | 12:52 | |
sandywalsh | and dabo, sorry if I seemed impatient, I wasn't aware of the gory details of your events yesterday. :) | 12:53 |
dabo | sandywalsh: np | 12:53 |
dabo | I'm just frustrated at unfamiliar processes über alles | 12:54 |
soren | sandywalsh: You can always propose bug fixes for merge. | 12:55 |
soren | sandywalsh: All the way up until release. | 12:55 |
soren | sandywalsh: ...and get them merged, that is. | 12:55 |
sandywalsh | soren, so the merge-prop freeze is just regarding blueprints, correct? | 12:56 |
soren | sandywalsh: The freezes only really pertain to new features or stuff that alters behaviour (other than fixing bugs). | 12:56 |
sandywalsh | soren, thanks for the clarification. | 12:56 |
soren | sandywalsh: Well, yes. | 12:56 |
soren | sandywalsh: It's for everything that ought to have a corresponding blueprint. | 12:56 |
sandywalsh | right | 12:57 |
soren | sandywalsh: If a snazzy, new feature came along that was well written, had good tests etc., but didn't have a blueprint, I wouldn't mind approving it in spite of the lack of blueprint. | 12:57 |
sandywalsh | soren, and thus my "blurry area" comment above and, I think, the source of dabo's frustration. | 12:58 |
*** ctennis has joined #openstack | 12:58 | |
sandywalsh | soren, but I understand your rationale | 12:58 |
dabo | sandywalsh: exactly. Why have a freeze deadline if you can get around it | 12:58 |
*** henrichrubin has joined #openstack | 13:01 | |
*** MarkAtwood has quit IRC | 13:01 | |
*** anotherjesse has joined #openstack | 13:02 | |
anotherjesse | anyone know why /var/log/messages and /var/log/user.log would be size 0 on a debian/ubuntu box? I've rebooted | 13:03 |
*** henrichrubin has quit IRC | 13:06 | |
*** calavera has quit IRC | 13:07 | |
uvirtbot | New bug: #699814 in nova "nova-compute tries to connect to the wrong database at startup" [Undecided,New] https://launchpad.net/bugs/699814 | 13:11 |
*** adiantum has quit IRC | 13:14 | |
ttx | dabo: I thought my email explained it clearly... but apparently not | 13:14 |
dabo | ttx: today's email did. Up until today I had no idea that the deadline was fluid. | 13:15 |
ttx | dabo: this particular freeze is really about organizing reviews in the last week before FF | 13:16 |
ttx | dabo: for branches proposed at least one week before, we can ensure review. For branches proposed closer to FF, we can't promise anything | 13:16 |
ttx | so the freeze says "if you want to get your branch merged, propose it before BMPFreeze, which is one week before" | 13:17 |
dabo | ttx: thanks. I said that your email today explained things clearly | 13:17 |
ttx | the name "freeze" is probably not the best chosen one. It's really more a deadline :) | 13:17 |
ttx | dabo: I explained it during Tuesday's meeting as well, fwiw | 13:17 |
ttx | Note that FeatureFreeze is much stricter. | 13:18 |
dabo | sandywalsh: looks like someone beat me to it. The latest trunk has no use of internal_id | 13:18 |
*** WonTu has joined #openstack | 13:18 | |
*** WonTu has left #openstack | 13:19 | |
sandywalsh | haha | 13:19 |
* sandywalsh merging | 13:19 |
*** hggdh has quit IRC | 13:19 | |
dabo | well, this was an extremely productive use of the last 4 hours of my development time. | 13:20 |
* ttx wonders how that confusion happened. Usually you have a bug and only one person being assigned to it | 13:20 | |
*** westmaas has joined #openstack | 13:21 | |
sandywalsh | ttx perhaps that's where the problem stemmed from, I don't think there was a bug filed for instance_id or the bugs that followed. | 13:22 |
sandywalsh | but, as is often the case, I'm probably wrong. | 13:22 |
ttx | sandywalsh: yep -- not filing a bug can be seen as a gain of time, but sometimes triggers catastrophic loss of time | 13:23 |
dabo | the problem stemmed from the urgency of trying to get everything done by the end of yesterday | 13:23 |
dabo | I had already fixed it in my branch and moved on | 13:23 |
ttx | dabo: I'm sorry about that. I'll try to communicate better next time | 13:24 |
ttx | Though all Freezes and Exceptions are already quite documented | 13:25 |
*** hggdh has joined #openstack | 13:25 | |
sandywalsh | Just tried pycharm ... it's actually quite handy for browsing all the code if nothing else. | 13:28 |
soren | anotherjesse: Maybe it just got rotated? | 13:34 |
*** nelson__ has quit IRC | 13:49 | |
*** Abd4llA has quit IRC | 13:49 | |
*** nelson__ has joined #openstack | 13:49 | |
*** ramkrsna has quit IRC | 13:53 | |
*** 36DAAZJLY is now known as guigui | 13:54 | |
*** westmaas has quit IRC | 13:57 | |
dabo | For our marketing team: http://www.dilbert.com/strips/comic/2011-01-07/ | 14:03 |
rogue780 | Does anyone know if the blog post by Anne Gentle on 30 November regarding consolidated docs and tutorials is still fairly accurate or not? | 14:04 |
ttx | annegentle: ^ | 14:05 |
annegentle | rogue780: the blog post with the goals for Bexar? | 14:12 |
rogue780 | specifically, 1. Docs site – OpenStack needs a central docs.openstack.org site that curates the content from various other sources and gives a good user experience upon landing on it. My goal is to implement this in time for Bexar (February). | 14:13 |
rogue780 | and 5. Tutorials – Now that we have virtual boxes and stack on a stick in progress, we need tutorials that are meaningful and simple enough to step through while still giving an impressive demonstration of the power of OpenStack clouds. | 14:13 |
annegentle | rogue780: yes, for 1. I have docs.openstack.org set up and am working on seeding content. | 14:13 |
*** gondoi has joined #openstack | 14:13 | |
annegentle | rogue780: for 5. I had a volunteer want to write a tutorial for LAMP stack but haven't heard from him for a while. We still have need there. | 14:14 |
rogue780 | cool. we're experimenting with openstack here, but haven't ever had any experience with cloud computing before, so we're finding the learning curve to be rather steep. a consolidated documentation site and a few useful tutorials would be fantastic for people like us who have no cloud experience | 14:15 |
rogue780 | dabo, I just printed that comic off this morning and put it up here at work. It perfectly describes our management at the moment | 14:16 |
*** ppetraki has joined #openstack | 14:17 | |
rogue780 | bb in 10 | 14:18 |
ttx | lol | 14:19 |
*** allsystemsarego_ has joined #openstack | 14:19 | |
annegentle | rogue780: totally grok that sensation "what the heck can this do?" | 14:20 |
annegentle | rogue780: working on it furiously :) it's a ton of work so collaborators, even newbies, are always welcome. Start writing on the wiki for an easy start, esp. if you're the type who learns by writing down. | 14:21 |
*** allsystemsarego has quit IRC | 14:22 | |
*** allsystemsarego_ has quit IRC | 14:24 | |
rogue780 | will do. | 14:25 |
*** matclayton has joined #openstack | 14:25 | |
tr3buchet | o/ | 14:26 |
dabo | hey trey | 14:28 |
*** lvaughn has joined #openstack | 14:28 | |
sandywalsh | trunk still busted ... investigating | 14:32 |
*** rogue780 has quit IRC | 14:32 | |
sandywalsh | internal_id http://paste.openstack.org/show/443/ | 14:35 |
*** mdomsch has joined #openstack | 14:37 | |
*** dendrobates is now known as dendro-afk | 14:37 | |
anotherjesse | any rackers around who know about performance of slicehost - I run userscripts.org and the server needs replacing - want to chat about net/disk i/o | 14:37 |
*** rogue780 has joined #openstack | 14:37 | |
dabo | sandywalsh: do you have the latest? My n/c/api.py looks like this: http://paste.openstack.org/show/444/ | 14:38 |
sandywalsh | hmm, I just merged from trunk | 14:38 |
sandywalsh | but will change just in case | 14:38 |
*** dendro-afk is now known as dendrobates | 14:38 | |
sandywalsh | seems like a bad idea exposing database ID's to the outside world | 14:40 |
dabo | sandywalsh: there's an old saying: intelligent keys aren't | 14:40 |
dabo | PKs in a database should be used for nothing other than relational purposes | 14:40 |
sandywalsh | so why are we exposing instance id ? | 14:41 |
dabo | because having one ID column is cleaner than two! | 14:41 |
*** jdarcy has joined #openstack | 14:50 | |
*** comstud has quit IRC | 14:51 | |
sandywalsh | trunk is going again \o/ | 14:51 |
*** vishy has quit IRC | 14:51 | |
*** sleepson- has quit IRC | 14:51 | |
*** devcamcar has quit IRC | 14:51 | |
*** vishy has joined #openstack | 14:52 | |
*** devcamcar has joined #openstack | 14:52 | |
*** ChrisAM has quit IRC | 14:52 | |
*** sleepsonthefloor has joined #openstack | 14:53 | |
*** ChrisAM1 has joined #openstack | 14:54 | |
* sandywalsh finds it odd his merge with trunk didn't grab that change | 14:54 | |
* sandywalsh shrugs and moves on | 14:54 | |
notmyname | 55555555 | 14:58 |
*** ChrisAM1 is now known as ChrisAM | 14:59 | |
notmyname | aaaaaaawwwwweeeeeeeeeeeeeeeeeeeeyyyyyyyyyyybbbbbbbb55555ppppppppppp88888uu4yt5tuyy4ygey4yftyfghgbedbgehdytdgtewdt3rtedtdr3tdrtd3tdtd | 15:01 |
notmyname | augh! sorry. 2 year-old attacking | 15:01 |
soren | Oh, I thought it was your password. | 15:02 |
sandywalsh | notmyname, thought you were having a stroke | 15:02 |
*** f4m8 is now known as f4m8_ | 15:04 | |
xtoddx | soren, vishy: i merged newlog2 to trunk, but should be good for merge again. had to remove references to internal_id and swap in nova.log for logging in a few places. | 15:04 |
notmyname | soren: hunter2 is my pw | 15:04 |
*** ArdRigh has quit IRC | 15:04 | |
*** spectorclan has joined #openstack | 15:07 | |
soren | notmyname: Mine too! | 15:08 |
soren | xtoddx: I'm about to leave for now. Want to have a quick chat about the version branch? | 15:08 |
xtoddx | soren: sure, let me pull up your branch real quick | 15:09 |
soren | Cool. | 15:09 |
sandywalsh | dabo, are you going to reissue your merge prop for password reset? | 15:11 |
dabo | sandywalsh: I never proposed it. I was too busy trying to fix the internal_id stuff | 15:11 |
xtoddx | soren: so yours isolates the code generation to vcsversion, and the rest stays the same? | 15:11 |
dabo | working on it today. One test still breaks. | 15:12 |
sandywalsh | dabo, thx | 15:12 |
soren | xtoddx: I leave all code generation to bzr. | 15:12 |
xtoddx | soren: yea, i like that better | 15:12 |
soren | xtoddx: That's the primary change, really. | 15:12 |
xtoddx | soren: want to just file a merge prop for yours? | 15:13 |
soren | xtoddx: Sure, I could do that. | 15:13 |
xtoddx | soren: i'll mark mine as rejected | 15:13 |
soren | xtoddx: Cool beans. | 15:13 |
annegentle | soren, xtoddx, how do you retrieve the version info after installing? | 15:14 |
xtoddx | annegentle: import nova.version ; print nova.version.version_string() | 15:15 |
rogue780 | openstack@openstack:~$ euca-authorize default -P tcp -p 22 -s 0.0.0.0/0 | 15:15 |
rogue780 | OperationalError: (OperationalError) attempt to write a readonly database u'INSERT INTO security_groups (created_at, updated_at, deleted_at, deleted, name, description, user_id, project_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?)' ('2011-01-07 15:14:47.529301', None, None, False, 'default', 'default', u'ttx', u'myproject') | 15:15 |
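rogue780's traceback is sqlite refusing to write nova's database file, which usually comes down to file permissions; a typical fix looks like this (the path and ownership are assumptions based on the packaged defaults of the time):

```sh
sudo chown nova:nova /var/lib/nova/nova.sqlite
```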
annegentle | xtoddx: great, thanks | 15:15 |
soren | xtoddx: https://code.launchpad.net/~soren/nova/version/+merge/45517 | 15:19 |
* soren has to run. May be back this evening. | 15:19 | |
*** mdomsch has quit IRC | 15:29 | |
*** glenc has joined #openstack | 15:31 | |
*** befreax has quit IRC | 15:31 | |
*** glenc_ has quit IRC | 15:32 | |
*** kevnfx has joined #openstack | 15:43 | |
*** arthurc has joined #openstack | 15:45 | |
*** kevnfx has quit IRC | 15:46 | |
*** dfg_ has joined #openstack | 15:47 | |
*** dragondm has joined #openstack | 15:55 | |
*** dragondm has joined #openstack | 15:56 | |
uvirtbot | New bug: #699878 in nova "Compute worker silently fails when XS host runs out of space" [Undecided,New] https://launchpad.net/bugs/699878 | 16:02 |
*** dubsquared has joined #openstack | 16:14 | |
*** enigma has joined #openstack | 16:14 | |
*** kevnfx has joined #openstack | 16:19 | |
*** guigui has quit IRC | 16:20 | |
*** piken has joined #openstack | 16:26 | |
piken | Hey all. | 16:26 |
piken | So I am thinking of trying something odd with openstack and wanted to get a few pointers. | 16:26 |
piken | We have a non-cloud system that we need to provision software to in a bsd jail style environment that we spawn on demand. | 16:27 |
piken | I have written a script engine in Python to do the provisioning and it works well. | 16:27 |
*** dubsquared has quit IRC | 16:27 | |
piken | I was wondering about the possibility of using the nova components and message queue to do the calls to the provisioning system for it. | 16:27 |
piken | ie a provision that doesn't hit nova-network or nova-compute, but my internal system instead. | 16:28 |
piken | would that be feasible? | 16:28 |
*** dubsquared has joined #openstack | 16:34 | |
dubsquared | morning/afternoon #openstack! | 16:34 |
*** jaypipes has joined #openstack | 16:34 | |
dubsquared | anyone privy to nova-objectstore getting fixed in the packages? | 16:35 |
vishy | piken: seems like there was a blueprint for lxc containers that may do exactly that. | 16:38 |
jaypipes | hello again, #openstack. :) | 16:38 |
KnightHacker | gholt: I just submitted my patch. Should I "propose for merging"? | 16:38 |
vishy | piken: as i understand it lxc is roughly equivalent to a bsd jail and is supported by libvirt. | 16:40 |
vishy | piken: if you want to implement your scripting version, it might be good to implement it as a new compute driver with possibly a new network manager as well | 16:41 |
vishy | piken: blueprint is here https://blueprints.launchpad.net/nova/+spec/bexar-nova-containers looks like it hasn't been started yet | 16:41 |
*** Ryan_Lane|sleep is now known as Ryan_Lane | 16:42 | |
*** rlucio has joined #openstack | 16:44 | |
*** rlucio has quit IRC | 16:45 | |
*** troytoman has joined #openstack | 16:52 | |
dabo | what's the deal with the copyright comments at the top of code files? Do we change 'em to 2010-2011, or just 2011, or leave 'em as 2010? | 16:58 |
ttx | hey jay, nice vacation ? | 16:58 |
ttx | jaypipes: ^ | 16:58 |
ttx | jaypipes: I marked i18n-support and image-service-use-glance-clients "slow progress" because I assumed they still needed some code merged. Feel free to correct me | 16:59 |
jk0 | dabo: I think annegentle started changing them to 2010-2011 in her docs | 17:00 |
dabo | but if I'm submitting a new script... do I make them 2011? | 17:00 |
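For concreteness (the holder wording is illustrative, not nova's actual header text): files touched across both years get the range, while a brand-new 2011 file would carry just the one year.

```python
# Copyright 2010-2011 OpenStack LLC.   (file modified in both years)
# Copyright 2011 OpenStack LLC.        (file first added in 2011)
```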
*** maplebed has joined #openstack | 17:00 | |
jaypipes | ttx: yup, that's correct | 17:01 |
jaypipes | ttx: and, yes, great vacation thx :) | 17:01 |
dubsquared | new nova install (using packages), using UEC ubuntu image…following errors —> http://paste.openstack.org/show/445/ http://paste.openstack.org/show/446/ | 17:02 |
rogue780 | the instance I'm trying to launch is stuck on pending | 17:03 |
*** ccustine has joined #openstack | 17:03 | |
uvirtbot | New bug: #699910 in nova "Nova RPC layer silently swallows exceptions" [Undecided,In progress] https://launchpad.net/bugs/699910 | 17:06 |
creiht | nelson__: btw, in order to contribute code to openstack, we need you to sign a CLA, as described here: http://wiki.openstack.org/HowToContribute | 17:07 |
nelson__ | already done. | 17:07 |
creiht | cool | 17:07 |
creiht | did you add yourself here: http://wiki.openstack.org/Contributors | 17:07 |
creiht | ? | 17:07 |
vishy | xtoddx: pep8 on newlog2 | 17:08 |
*** brd_from_italy has quit IRC | 17:09 | |
uvirtbot | New bug: #699912 in nova "When failing to connect to a data store, Nova doesn't log which data store it tried to connect to" [Undecided,In progress] https://launchpad.net/bugs/699912 | 17:12 |
nelson__ | yes, I added myself seconds ago. | 17:16 |
creiht | nelson__: awesome... thanks! | 17:17 |
creiht | and thanks for your contribution | 17:17 |
*** glenc has left #openstack | 17:22 | |
* annegentle does a docs happy dance around nelson__ | 17:22 | |
nelson__ | :) | 17:22 |
*** glenc has joined #openstack | 17:22 | |
*** kashyapc has joined #openstack | 17:23 | |
*** dirakx has quit IRC | 17:23 | |
jaypipes | gah, holy full inbox Batman :( | 17:24 |
* jaypipes can't imagine how a man or woman coming back from maternity/paternity leave ever gets back into the swing of work... | 17:25 | |
eday | jaypipes: welcome back! | 17:25 |
jaypipes | eday: thx mate! :) | 17:26 |
jaypipes | eday: how was your holiday? | 17:26 |
eday | jaypipes: pretty uneventful, which is good in its own way :) | 17:26 |
jaypipes | eday: indeed | 17:26 |
eday | glad to be back in the cold weather? | 17:26 |
jaypipes | eday: was good to see the "kids". I missed my puglet. :) | 17:26 |
eday | hehe | 17:27 |
jaypipes | eday: heh, yeah, stepping off the plane in Columbus to 20 degrees was, well, ass-baggery. | 17:27 |
eday | nice | 17:27 |
*** dragondm has quit IRC | 17:28 | |
*** kashyapc has quit IRC | 17:29 | |
*** Abd4llA has joined #openstack | 17:38 | |
*** Abd4llA is now known as Guest38079 | 17:38 | |
dabo | Just pushed the merge prop for the password reset blueprint: https://code.launchpad.net/~ed-leafe/nova/xs-password-reset/+merge/45537 | 17:39 |
*** befreax has joined #openstack | 17:39 | |
*** dragondm has joined #openstack | 17:43 | |
spectorclan | jaypipes: I assume it was GREY as well in Columbus | 17:43 |
jaypipes | spectorclan: yup! | 17:45 |
jaypipes | dabo: well played, Mr Leafe. | 17:45 |
jaypipes | dabo: spoken with Sean Connery accent... | 17:45 |
dabo | jaypipes: how so? | 17:45 |
*** ccustine has quit IRC | 17:46 | |
jaypipes | dabo: xs-password-reset | 17:46 |
*** brd_from_italy has joined #openstack | 17:47 | |
jaypipes | dabo: I was attempting to say nice work on the password reset stuff :) | 17:47 |
dabo | jaypipes: ah. Thought you were commenting on my submitting the day after the freeze. :) | 17:48 |
jaypipes | dabo: heh, no, I will let ttx fret about such things ;) | 17:48 |
dubsquared | lol | 17:51 |
*** leted has joined #openstack | 17:52 | |
*** joearnold has joined #openstack | 17:53 | |
*** Guest38079 has quit IRC | 17:55 | |
*** comstud has joined #openstack | 17:58 | |
*** ChanServ sets mode: +v comstud | 17:58 | |
*** jdurgin has joined #openstack | 17:58 | |
*** arthurc has quit IRC | 18:02 | |
uvirtbot | New bug: #699929 in nova "All the nova- services show up as [?] with services --status-all" [Undecided,New] https://launchpad.net/bugs/699929 | 18:02 |
*** fabiand_ has left #openstack | 18:03 | |
*** spsneo has joined #openstack | 18:04 | |
spsneo | jaypipes: ping | 18:04 |
*** ccustine has joined #openstack | 18:09 | |
*** trin_cz has quit IRC | 18:09 | |
jaypipes | spsneo: well, hi there Sid :) | 18:10 |
spsneo | jaypipes: I was just reading about nova and swift | 18:10 |
spsneo | both of them sound interesting | 18:10 |
spsneo | though I think I should start with nova | 18:11 |
spsneo | I just signed the CLA | 18:11 |
spsneo | jaypipes: what are you working on currently? | 18:12 |
jaypipes | spsneo: awesome. let us know when you have questions on stuff as you're reading through the code | 18:12 |
jaypipes | spsneo: I work on Glance and a bit on Nova right now. | 18:12 |
spsneo | jaypipes: what's glance? | 18:13 |
spsneo | jaypipes: I could only see nova and swift | 18:13 |
spsneo | jaypipes: so should I start with bugs? | 18:13 |
jaypipes | spsneo: it is the image registry and delivery server (glance.openstack.org) | 18:13 |
spsneo | jaypipes: ok | 18:13 |
jaypipes | spsneo: you can, sure, or documentation would also be good... | 18:13 |
spsneo | jaypipes: where's the documentation? | 18:13 |
spsneo | jaypipes: ok got it | 18:14 |
jaypipes | spsneo: google is your friend ;) | 18:14 |
spsneo | jaypipes: :D | 18:14 |
*** hash9 has joined #openstack | 18:14 | |
*** daleolds has joined #openstack | 18:15 | |
*** dirakx has joined #openstack | 18:20 | |
*** hadrian has joined #openstack | 18:20 | |
*** hash9 has quit IRC | 18:22 | |
Ryan_Lane | how do the floating IPs work? I added a single IP using "nova-manage floating create <ipaddress> <computenode>", but when I go to allocate it, I get a "NoMoreAddresses" error | 18:22 |
*** kevnfx has quit IRC | 18:31 | |
*** Jordandev has joined #openstack | 18:34 | |
Ryan_Lane | ah. I need to use the network-node as a host, not the compute-node :) | 18:37 |
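Spelled out with Ryan_Lane's own placeholder syntax (argument order kept as he quoted it), the working form points at the host running nova-network rather than a compute node:

```sh
nova-manage floating create <ipaddress> <network-node>
```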
openstackhudson | Project nova build #366: SUCCESS in 1 min 17 sec: http://hudson.openstack.org/job/nova/366/ | 18:39 |
*** matclayton has left #openstack | 18:43 | |
openstackhudson | Project nova build #367: SUCCESS in 1 min 18 sec: http://hudson.openstack.org/job/nova/367/ | 18:44 |
*** soliloquery has joined #openstack | 18:44 | |
*** MarkAtwood has joined #openstack | 18:49 | |
sandywalsh | Ryan_Lane, I think I was talking to you yesterday about the mapping json file in the api-parity branch? | 18:50 |
sandywalsh | Ryan_Lane, we were talking about 2-way functions, yes? | 18:50 |
Ryan_Lane | sandywalsh: we were talking about display_name in that branch :) | 18:51 |
sandywalsh | Ryan_Lane, ah, rats ... sorry | 18:51 |
Ryan_Lane | I do remember you having that conversation with someone | 18:51 |
* sandywalsh ponders | 18:51 | |
Ryan_Lane | was it eday? | 18:51 |
sandywalsh | could have been. eday did we talk about two-way functions for api-parity yesterday? | 18:51 |
eday | sandywalsh: I was talking to you about 2-way function for those ids | 18:53 |
sandywalsh | eday, phew | 18:53 |
sandywalsh | eday, so one proposal I received was utf-8 -> bigint | 18:53 |
sandywalsh | eday, still makes for BUN (big ugly number) | 18:53 |
sandywalsh | eday, but, it's reversible | 18:54 |
sandywalsh | eday, personally, I think for the short term we live with it. | 18:55 |
sandywalsh | eday, and let our hatred of it grow until we can contain it no longer | 18:56 |
vishy | who is mdragon on here? | 18:56 |
sandywalsh | Monsyne Dragon ... ozone dev | 18:57 |
sandywalsh | working on the xs-console bp | 18:57 |
vishy | sandywalsh: is he on irc? | 18:57 |
sandywalsh | vishy, yes, but I think he's at lunch right now (SAT) | 18:57 |
vishy | sandywalsh: I have a question about the way xen consoles work | 18:57 |
sandywalsh | vishy, I just did a review, I can take a stab at it? | 18:58 |
iRTermite | vishy: yea, he's not around. did you want me to go see if he's here? | 18:58 |
vishy | sandywalsh: does the console proxy have to run on the same host as the instance? | 18:58 |
vishy | sandywalsh: or does xen have some proxying magic that it does under the covers | 18:59 |
sandywalsh | vishy, heh, that was one of my questions too. | 18:59 |
sandywalsh | vishy, sorry, don't have an answer, but my gut says it runs on the same server as the console application | 18:59 |
vishy | sandywalsh: because for ajaxconsole we had to create a little proxy to send traffic from the api-server to the compute host | 18:59 |
vishy | sandywalsh: in most deployments the compute hosts aren't publicly accessible | 19:00 |
sandywalsh | vishy, right ... the api server seems to be the right place for it | 19:00 |
vishy | sandywalsh: so we will need something similar if xen isn't doing it magically | 19:00 |
sandywalsh | vishy, so you may need two proxies? dom0 -> api proxy : api proxy -> public ? | 19:01 |
vishy | something like that | 19:01 |
*** leted has quit IRC | 19:02 | |
*** irahgel has left #openstack | 19:02 | |
vishy | you can see how anthony addressed it in the ajaxconsole branch | 19:02 |
sandywalsh | vishy, hmm, the instance listens for vnc connections, so wouldn't just one proxy work? | 19:02 |
vishy | the instance itself? | 19:02 |
vishy | you mean the hypervisor? | 19:02 |
sandywalsh | ah, no, you're right. the hypervisor runs the vnc service | 19:03 |
vishy | just need a proxy from public into host port that the hypervisor is running on | 19:03 |
sandywalsh | but still, it listens for connections, so wouldn't one proxy work? | 19:03 |
sandywalsh | yes | 19:03 |
vishy | iRTermite: if you see him, ask him to read through this scrollback and we can discuss | 19:04 |
sandywalsh | vishy, dragon is here now | 19:04 |
dragondm | ya | 19:04 |
sandywalsh | dragondm, do you have the thread ^^ | 19:04 |
dragondm | ya | 19:04 |
* iRTermite bows out and goes back to eating | 19:05 | |
dragondm | sandywalsh: about the driver class, yes that is the base class for the console drivers | 19:08 |
dragondm | (driver.ConsoleProxy) | 19:08 |
eday | sandywalsh: that's fine, it's just unusable for anything else besides testing/small installation purposes :) | 19:12 |
sandywalsh | dragondm, I guess I'm getting confused by the term "console" when it's really a "console proxy". Driver doesn't really match either. (more questions in the merge comments) | 19:14 |
dragondm | yah, I am typing up a comment now in reply ... | 19:14 |
sandywalsh | eday, yeah. I just figured tweaking objectstore at this stage was not worthwhile. | 19:15 |
*** trin_cz has joined #openstack | 19:17 | |
sandywalsh | so vishy, dragondm will answer the "where does the proxy run" question in the merge prop comments. https://code.launchpad.net/~mdragon/nova/xs-console/+merge/45324 | 19:17 |
vishy | sandywalsh: sounds good | 19:18 |
*** hadrian has quit IRC | 19:18 | |
dragondm | vishy: what do you need to know? The way rackspace runs things w/ xenserver is thus: | 19:19 |
dragondm | We have separate console hosts (which are set up for public access). For xenserver there is a proxy daemon called xvp (Xenserver Vnc Proxy) which sets up the password access and can multiplex vnc connections if the client supports that. | 19:20 |
vishy | dragondm: I'm about to head out for lunch. I'm just trying to figure out how a user connects to a console that is running on a compute host when the compute host is generally not publicly accessible | 19:21 |
dragondm | it's thru the console host, which is a separate node | 19:21 |
vishy | dragondm: ah so xvp is is a vnc proxy | 19:21 |
dragondm | yup | 19:21 |
vishy | does it work with non xen vnc consoles? | 19:21 |
vishy | dragondm: ok makes sense to me. | 19:22 |
dragondm | btw, my xs-console stuff is designed to be fairly flexible. You could use another vnc proxy w/ say kvm, if you added a different driver | 19:22 |
jk0 | for xen classic we do something similar, but use ajaxterm instead | 19:22 |
dragondm | yup | 19:22 |
dragondm | sandywalsh: did you see the comment I added? | 19:23 |
sandywalsh | dragondm, doing so now | 19:23 |
*** jc_smith has joined #openstack | 19:23 | |
sandywalsh | dragondm, so what I'm proposing is that driver.py be renamed to proxy.py, since that's what it is. | 19:24 |
*** rlucio has joined #openstack | 19:24 | |
sandywalsh | dragondm, console is implied by the namespace | 19:25 |
dragondm | vishy: are you asking about xvp working w/ non xen? I don't think so, it talks to the xenserver via xenapi. | 19:25 |
dragondm | sandywalsh: ?? er actually it's the driver. | 19:25 |
sandywalsh | dragondm, re: 574 you can use stubout to achieve the same results. no magic flags required | 19:25 |
dragondm | stubout? | 19:26 |
dragondm | sandywalsh: actually I suppose I might rename the ConsoleProxy class to DriverBase or somesuch. | 19:27 |
sandywalsh | dragondm, look at nova/tests/api/openstack/test_servers | 19:27 |
sandywalsh | dragondm, I think I'm getting confused with the term driver in this context | 19:27 |
sandywalsh | dragondm, what is it a driver of? console proxies? | 19:27 |
*** hadrian has joined #openstack | 19:28 | |
dragondm | it's the driver for the manager class. Like the xen_conn for libvirt is the driver for the compute manager | 19:28 |
dragondm | er like xen_conn or libvirt_conn ... | 19:28 |
sandywalsh | yes, but it represents a proxy entry in the xvp daemon | 19:30 |
dragondm | to be specific it's the base class for the driver, of which there is 1 currently (XVPConsoleProxy) | 19:30 |
sandywalsh | yes, I've followed that. The naming was confusing (but something I can easily get over) | 19:31 |
dragondm | ?? I'm confused... represents the proxy entry? | 19:31 |
sandywalsh | ok, so ... the manager is a factory of drivers. | 19:32 |
dragondm | I can rename that class to something like DriverBase or somesuch | 19:32 |
sandywalsh | drivers are facades over proxy entries (which get handed down to xvp) and go in the db. | 19:33 |
*** fabiand_ has joined #openstack | 19:33 | |
sandywalsh | there are different drivers (ideally) for different console mechanisms | 19:33 |
*** rogue780 has quit IRC | 19:33 | |
dragondm | yes there are different drivers for different mechanisms | 19:33 |
sandywalsh | so my confusion was the filename didn't match the class. | 19:33 |
dragondm | ah | 19:34 |
sandywalsh | either the filename was wrong or the class was wrong | 19:34 |
sandywalsh | at first I was going to suggest the class being ConsoleDriver | 19:34 |
dragondm | I was following the example of the other workers; they have standard module names (manager/driver/api) | 19:34 |
sandywalsh | but then it seemed that manager was a factory for proxy entries, so the filename was wrong. | 19:34 |
dragondm | no the driver is the driver, not a factory. | 19:35 |
sandywalsh | right, the manager is the factory of drivers | 19:35 |
dragondm | I can rename that class | 19:35 |
sandywalsh | anywho ... it caused a schism in my brain. So I asked :) | 19:36 |
dragondm | ok | 19:36 |
sandywalsh | :) | 19:36 |
sandywalsh | so, the Manager is instantiated by the service, correct? | 19:37 |
dragondm | yup | 19:37 |
sandywalsh | ok, so that answers my other question. :) | 19:37 |
sandywalsh | and the xvp daemon runs, where? api server or dom0? | 19:38 |
dragondm | neither. it runs on a separate console server | 19:38 |
sandywalsh | so how do public requests get to it? | 19:38 |
dragondm | which is where the nova-console service will run too | 19:38 |
sandywalsh | nova-console wouldn't be accessible to the outside world though, would it? | 19:39 |
sandywalsh | only api is | 19:39 |
dragondm | the console servers will have public interfaces. and no, nova-console is not accessible to the outside world | 19:40 |
dragondm | you call the openstack api, and get back a host/port/password combo that you can connect to w/ a vnc client | 19:41 |
uvirtbot | New bug: #700015 in nova "Headers in virt/images.py function _fetch_s3_image are sent improperly to curl" [Undecided,New] https://launchpad.net/bugs/700015 | 19:41 |
dragondm | (in our case the client will be a java applet) | 19:41 |
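A client-side session would therefore look roughly like the sketch below; the endpoint URL and response fields are assumptions for illustration, not the actual API of the branch:

    import json
    import urllib2

    # Hypothetical endpoint and response shape: ask the API for console access.
    resp = urllib2.urlopen('http://api.example.com/v1.0/servers/42/console')
    info = json.loads(resp.read())
    # e.g. {"host": "console1.example.com", "port": 5901, "password": "s3cret"}
    # host/port point at the public-facing console (xvp) host, never at the
    # compute node, which stays on the private network.
    print 'point a vnc client at %(host)s:%(port)s' % info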
*** nelson__ has quit IRC | 19:41 | |
sandywalsh | ah, ok, so console server and nova-console are different machines. | 19:41 |
*** nelson__ has joined #openstack | 19:42 | |
dragondm | Er? | 19:42 |
dragondm | define console-server ? | 19:42 |
*** Ryan_Lane is now known as Ryan_Lane|food | 19:42 | |
sandywalsh | you said above "the console servers will have public interfaces" | 19:42 |
dragondm | yes | 19:42 |
sandywalsh | these are different machines than the machines that nova-console runs on? | 19:43 |
*** leted has joined #openstack | 19:43 | |
dragondm | no | 19:43 |
sandywalsh | ok, so they offer public interfaces on a separate nic? | 19:44 |
dragondm | but the worker does not expose a public interface (and presumably would talk over a separate network interface from the public) | 19:44 |
dragondm | yes | 19:44 |
dragondm | this is how, afaik, rackspace's current 'orange' code works | 19:44 |
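Putting the pieces together, the topology described above looks roughly like this (labels are editorial shorthand for what dragondm describes):

    vnc client --(public nic)--> console host [xvp + nova-console] --(private nic)--> xenserver compute hosts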
sandywalsh | cool, can they run on different machines? | 19:45 |
dragondm | no. the nova-console manages the daemon process | 19:45 |
dragondm | (that's its basic purpose) | 19:45 |
sandywalsh | ok, cool. do you think that will fly with other deployments? | 19:46 |
sandywalsh | but I suppose they can make a patch if they need it :) | 19:46 |
dragondm | if they are using xenserver... | 19:46 |
*** leted has quit IRC | 19:47 | |
sandywalsh | ok, that's a great help. Thanks for the clarifications dragondm! | 19:47 |
dragondm | or if they add a driver for a different vnc proxy | 19:47 |
dragondm | ok | 19:47 |
sandywalsh | I'll summarize in the merge | 19:48 |
dragondm | yah. I was hoping the info I put in the blueprint spec explained that architecture | 19:48 |
sandywalsh | you should try the stubout thing mentioned ^^ | 19:48 |
dragondm | ok | 19:48 |
sandywalsh | would be nice to take the magic flags out | 19:48 |
sandywalsh | I may have missed that point | 19:49 |
sandywalsh | my biggest hurdle was the pool vs. proxy vs. driver thing | 19:49 |
*** befreax has quit IRC | 19:51 | |
dragondm | ya. Really the whole xvp+nova-console server is the proxy. 'consoles' are entries in the proxy's config, and 'pools' are collections of consoles proxied *from* a specific hypervisor host. | 19:51 |
sandywalsh | can one xvp proxy from multiple hypervisors? | 19:52 |
dragondm | yes. that is why there are pools | 19:53 |
sandywalsh | perfect ... thx | 19:53 |
*** aliguori has quit IRC | 19:55 | |
sandywalsh | dragondm, approved | 19:57 |
sandywalsh | (I still have to get it running though) | 19:57 |
dragondm | stuill can't get xvp installed? | 19:58 |
dragondm | er, still | 19:59 |
EdwinGrubbs | letterj: are you still having login problems on Launchpad? | 20:00 |
sandywalsh | dragondm, haven't gotten to it yet | 20:01 |
dragondm | ah ok | 20:01 |
sandywalsh | dragondm, but there's nothing in your code that should bust the rest of trunk, so it's low risk | 20:01 |
sandywalsh | vishy, that was rather a long thread above, but did you get the gist of it? | 20:02 |
sandywalsh | vishy, the nova-console server runs the proxy as well and the assumption is that machine is dual-nic (one public, one private) | 20:02 |
*** dfg_ has quit IRC | 20:02 | |
dragondm | also, nova workers don't expose an interface anyway (since they make the connection to rabbit) | 20:04 |
dubsquared | ryan_lane in the house? | 20:05 |
*** Ryan_Lane|food is now known as Ryan_Lane | 20:05 | |
Ryan_Lane | just got back :) | 20:05 |
sandywalsh | dragondm, good point | 20:05 |
dubsquared | nice | 20:05 |
dubsquared | ryan_lane: bug 700015 - this is causing issues with the UEC images not being able to boot….and i suppose every other image as well | 20:06 |
uvirtbot | Launchpad bug 700015 in nova "Headers in virt/images.py function _fetch_s3_image are sent improperly to curl" [Undecided,New] https://launchpad.net/bugs/700015 | 20:06 |
dubsquared | that one | 20:06 |
dubsquared | lol | 20:06 |
Ryan_Lane | dubsquared: I'd imagine :) | 20:06 |
Ryan_Lane | not sure about public images | 20:06 |
Ryan_Lane | definitely for private ones | 20:06 |
Ryan_Lane | there's a few other bugs like this I'm tracking down | 20:06 |
eday | tr3buchet: you around? | 20:07 |
dragondm | sandywalsh: so the console box could be single nic, as well, with a firewall in front of it that only allowed public access to the vnc port(s) | 20:07 |
tr3buchet | eday yes | 20:07 |
dubsquared | nice, i just noticed yesterday there was a merge that broke the packages…and I've been swimming through logs trying to see what happened | 20:07 |
Ryan_Lane | for instance, in this pastebin: http://pastebin.com/VB0kFV5Y | 20:07 |
eday | tr3buchet: is set_admin_password a function that should not happen if a lock on the instance is active? | 20:07 |
Ryan_Lane | -append root=/dev/vda1 console=ttyS0 | 20:08 |
Ryan_Lane | should be: | 20:08 |
Ryan_Lane | -append 'root=/dev/vda1 console=ttyS0' | 20:08 |
dubsquared | http://paste.openstack.org/show/445/ and http://paste.openstack.org/show/446/ are the errors I'm dealing with | 20:08 |
tr3buchet | eday, that's a good question. i had originally planned on lock preventing changes of state | 20:08 |
sandywalsh | dragondm, but it needs access to the private network to talk to the rabbit cluster doesn't it? | 20:08 |
openstackhudson | Project nova build #368: SUCCESS in 1 min 20 sec: http://hudson.openstack.org/job/nova/368/ | 20:09 |
tr3buchet | but i think it should also apply to set_admin_password | 20:09 |
Ryan_Lane | dubsquared: my fix would solve at least part of your problem | 20:09 |
eday | tr3buchet: ok | 20:09 |
dragondm | yes | 20:09 |
Ryan_Lane | this is invalid to send to curl: -H Authorization: AWS d1fb4c58-7cd0-4e18-80dc-f362a71c0390:dubproj:L+9AEtMYn3PA0zUEhhQgSG14CqE= | 20:09 |
dubsquared | ryan_lane: /me cheers | 20:10 |
dragondm | sandywalsh: thus the firewall | 20:10 |
dubsquared | yeah, that is what i was thinking | 20:10 |
Ryan_Lane | it must be wrapped in quotes | 20:10 |
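Both failures above (qemu's -append and curl's -H) come from building a shell command string without quoting an argument that contains a space. A minimal sketch of the pitfall and two fixes, using Python 2's pipes.quote; the command strings are illustrative, with the header value taken from the paste above:

    import pipes
    import subprocess

    header = 'Authorization: AWS d1fb4c58-7cd0-4e18-80dc-f362a71c0390:dubproj:L+9AEtMYn3PA0zUEhhQgSG14CqE='

    # Broken: the space splits the header into two shell words, and other
    # characters can be misinterpreted depending on the shell.
    broken = 'curl -H %s http://example.com/' % header

    # Fix 1: quote the whole argument before interpolating it.
    fixed = 'curl -H %s http://example.com/' % pipes.quote(header)

    # Fix 2 (better): skip the shell entirely and pass an argv list, so no
    # quoting is needed at all.
    subprocess.call(['curl', '-H', header, 'http://example.com/'])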
sandywalsh | right | 20:10 |
dubsquared | sooooo…'part' of my problem… | 20:10 |
Ryan_Lane | I have a feeling you'll run into more problems after patching that ;) | 20:11 |
Ryan_Lane | I am | 20:11 |
Ryan_Lane | due to other things that need to be quoted | 20:11 |
dubsquared | crap | 20:12 |
Ryan_Lane | tracking em down :) | 20:12 |
dubsquared | I have a handful of images built…9.04, 10.04, 10.10, and centos 5.4 and centos 5.5 ... | 20:13 |
dubsquared | so i just wiped a box to restart the entire process just to ensure it was seamless... | 20:13 |
dubsquared | and kaboom | 20:13 |
dubsquared | :| | 20:13 |
letterj | EdwinGrubbs: Yes | 20:15 |
*** lvaughn has quit IRC | 20:16 | |
*** lvaughn has joined #openstack | 20:17 | |
letterj | EdwinGrubbs: I just did the "Forgot Password" process again | 20:17 |
*** kevnfx has joined #openstack | 20:18 | |
EdwinGrubbs | letterj: do you have multiple email addresses on your account? | 20:19 |
letterj | Yes | 20:20 |
EdwinGrubbs | letterj: can you try "Forgot my password" with the other ones. Unfortunately, the login.launchpad.net accounts don't always match up exactly with the launchpad accounts. | 20:20 |
EdwinGrubbs | letterj: I just noticed that you have 666 karma. I think that is considered really bad karma. | 20:22 |
*** aliguori has joined #openstack | 20:22 | |
gholt | We are talking about letterj here. ;) | 20:23 |
letterj | EdwinGrubbs: Using the other email address worked. Thanks for taking the time to help me. | 20:24 |
EdwinGrubbs | letterj: If you go back to https://login.launchpad.net/ , you should see a management screen with a "Manage email addresses" link so you can get your other email address correctly connected to the account. | 20:27 |
*** kevnfx has quit IRC | 20:27 | |
*** joearnold has quit IRC | 20:27 | |
vishy | Ryan_Lane what is that b64 encoded string? | 20:28 |
vishy | sandywalsh: followed thanks | 20:29 |
creiht | lol | 20:29 |
openstackhudson | Project nova build #369: SUCCESS in 1 min 21 sec: http://hudson.openstack.org/job/nova/369/ | 20:29 |
creiht | EdwinGrubbs: is there any way to lock letterj's karma to 666? :) | 20:29 |
sandywalsh | vishy, does that work for your deployment model? | 20:30 |
EdwinGrubbs | creiht: I believe you need to use an incantation. | 20:30 |
creiht | maybe an IncantationFactory | 20:30 |
* EdwinGrubbs casts fireball at that joke | 20:31 | |
vishy | sandywalsh: we aren't using xen so no | 20:32 |
vishy | :) | 20:32 |
vishy | sandywalsh: but i think it could work with an equivalent proxy service. | 20:32 |
sandywalsh | vishy, right, I just mean if you had a restriction of having to put the proxy on the console-server with dual-nics. Would that fly? | 20:34 |
creiht | heh | 20:34 |
vishy | sandywalsh: yes probably, I don't think there is any reason why you couldn't run it on the network or api host | 20:35 |
sandywalsh | vishy, excellent. thx | 20:37 |
dragondm | vishy: yah, you could run the console service on one of the other hosts | 20:43 |
dragondm | we (rs) just separate them for security + scaling | 20:43 |
Ryan_Lane | vishy: which encoded string? | 20:44 |
vishy | Ryan_Lane: at the end of the curl header you pasted above? | 20:45 |
Ryan_Lane | vishy: pasted from what curl was trying to run | 20:46 |
Ryan_Lane | authorization header | 20:46 |
Ryan_Lane | related to the last bug/merge I made | 20:46 |
vishy | Ryan_Lane: ah i guess it is the b64 encoded hmac for the request to s3. I was just curious why it never failed before. Perhaps it only fails if there happens to be a + in the header? | 20:50 |
Ryan_Lane | likely | 20:50 |
Ryan_Lane | it's a special char causing the problem | 20:50 |
Ryan_Lane | it's something getting interpreted by bash | 20:50 |
vishy | maybe = | 20:50 |
vishy | because that code hasn't changed recently, so it was odd to see it suddenly fail | 20:51 |
* Ryan_Lane nods | 20:51 | |
Ryan_Lane | I'm having other issues too :( | 20:51 |
Ryan_Lane | something bad must be getting passed to qemu on my nodes | 20:52 |
dubsquared | +1 | 20:52 |
Ryan_Lane | I can't get instances to start | 20:52 |
vishy | bad qemu | 20:52 |
Ryan_Lane | there is a bug in the template | 20:52 |
Ryan_Lane | for sure | 20:52 |
*** joearnold has joined #openstack | 20:52 | |
vishy | oh? | 20:52 |
Ryan_Lane | or libvirt | 20:52 |
vishy | are you running kvm or qemu? | 20:52 |
Ryan_Lane | qemu | 20:52 |
vishy | might be out of ram | 20:53 |
vishy | that causes things to fail | 20:53 |
Ryan_Lane | could be that too :) | 20:53 |
vishy | very badly | 20:53 |
Ryan_Lane | but... | 20:53 |
Ryan_Lane | <cmdline>root=/dev/vda1 console=ttyS0</cmdline> | 20:53 |
dubsquared | vishy: full logs from the issue yesterday —> http://paste.openstack.org/show/445/ and http://paste.openstack.org/show/446/ | 20:53 |
Ryan_Lane | ^^ seems to be problematic | 20:53 |
uvirtbot | Ryan_Lane: Error: "^" is not a valid command. | 20:53 |
Ryan_Lane | on the command line that is turned into: -append root=/dev/vda1 console=ttyS0 | 20:54 |
Ryan_Lane | where it should likely be -append 'root=/dev/vda1 console=ttyS0' | 20:54 |
Ryan_Lane | memory was definitely a problem :) | 20:57 |
*** ctennis has quit IRC | 21:00 | |
Ryan_Lane | template wasn't a problem at all it seems :) | 21:02 |
Ryan_Lane | \o/ | 21:02 |
*** pothos_ has joined #openstack | 21:02 | |
*** spsneo has left #openstack | 21:03 | |
*** nelson__ has quit IRC | 21:03 | |
*** nelson__ has joined #openstack | 21:03 | |
*** pothos has quit IRC | 21:04 | |
*** pothos_ is now known as pothos | 21:04 | |
uvirtbot | New bug: #700106 in glance "DescribeRegions is not working" [Undecided,New] https://launchpad.net/bugs/700106 | 21:06 |
Ryan_Lane | bah. UEC images seem to require UEC :( | 21:11 |
Ryan_Lane | 2011-01-07 21:02:40,499 - DataSourceEc2.py[WARNING]: waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id | 21:12 |
nelson__ | Is there a program which just tests the auth server? | 21:13 |
nelson__ | Or has that not been viewed as a priority because it's just a dev tool? | 21:13 |
creiht | nelson__: well there are unit and functional tests, but beyond that, I don't think so | 21:14 |
creiht | nelson__: Is there something in particular that you were wanting to test? | 21:15 |
*** ctennis has joined #openstack | 21:16 | |
*** ctennis has joined #openstack | 21:16 | |
vishy | soren, ttx, dendrobates: the ppa is broken, i have a fix here | 21:17 |
nelson__ | Still trying to figure out what's going on with my upload problem. | 21:18 |
vishy | http://pastie.org/1438394 | 21:18 |
nelson__ | it's almost as if something is contacting the account server when it should be contacting the container server. | 21:18 |
nelson__ | but I checked the logs and the port numbers. | 21:18 |
vishy | Ryan_Lane: which networking mode are you using? | 21:19 |
Ryan_Lane | vishy: flat | 21:19 |
nelson__ | So I was wondering if there were stand-alone tests for the container service or account service. | 21:19 |
vishy | Ryan_Lane, ok then you'll have to set up a manual forward for metadata from your gateway | 21:19 |
Ryan_Lane | oh. it can access it? | 21:19 |
vishy | api server provides metadata | 21:19 |
Ryan_Lane | ahhh. ok | 21:20 |
creiht | nelson__: Well there is a check that is done to see if the account exists, and that part is failing | 21:20 |
Ryan_Lane | how do I go about that? | 21:20 |
vishy | forwarding rules are set up automatically for flatdhcp and vlan | 21:20 |
vishy | you need to create an iptables type rule on your gateway | 21:20 |
creiht | for some reason it is appending your container name to the account, when it is checking to see if the account exists | 21:20 |
creiht | and that is why it 404s | 21:20 |
creiht | I traced all that down yesterday evening just to verify | 21:20 |
creiht | how it happens, I'm not entirely sure | 21:20 |
nelson__ | right. that's why I was wondering if somehow I misconfigured the account server on the container service's address. | 21:21 |
creiht | hrm | 21:21 |
nelson__ | I think I'll look at the proxy next to see what it's calling.... | 21:21 |
vishy | iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination <api_ip>:8773 | 21:22 |
Ryan_Lane | awesome. thanks | 21:22 |
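Once that DNAT rule is in place on the gateway, a guest on the flat network should be able to reach the metadata service. A quick check from inside an instance (the URL is the one quoted later in this log; 169.254.169.254:80 is rewritten to the API server's port 8773 by the rule above):

    import urllib2

    url = 'http://169.254.169.254/2009-04-04/meta-data/instance-id'
    print urllib2.urlopen(url).read()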
creiht | nelson__: can you run swift-ring-builder account.builder | 21:22 |
creiht | and | 21:22 |
vishy | Ryan_Lane: np | 21:22 |
creiht | swift-ring-builder container.builder | 21:22 |
creiht | and paste the output? | 21:22 |
nelson__ | GASP! | 21:23 |
Ryan_Lane | vishy: in this mode do I also need to handle NAT manually for floating IPs? | 21:23 |
nelson__ | oh, wait, no, | 21:23 |
nelson__ | they're all accessing devices on port 6002, but ... that's storage. | 21:23 |
vishy | Ryan_Lane: floating IP natting will probably be nasty | 21:24 |
creiht | account, container, and storage should all have different ports | 21:24 |
Ryan_Lane | is that pretty much only supported well in vlan mode? | 21:24 |
vishy | Ryan_Lane: it all works fine as long as the network node is also the gateway | 21:24 |
Ryan_Lane | yeah | 21:24 |
vishy | Ryan_Lane: it should work fine in flatdhcp | 21:24 |
Ryan_Lane | I'm already planning on doing that :) | 21:25 |
creiht | nelson__: by default, object is 6000, container is 6001, and account is 6002 | 21:25 |
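A quick way to spot a ring built against the wrong port is to inspect the ring devices directly. A minimal sketch using Swift's Ring class, assuming the ring files live in the default /etc/swift location:

    from swift.common.ring import Ring

    # With the default ports, every device in object.ring.gz should be on
    # 6000, container.ring.gz on 6001, and account.ring.gz on 6002.
    for name, want in (('object', 6000), ('container', 6001), ('account', 6002)):
        ring = Ring('/etc/swift/%s.ring.gz' % name)
        for dev in ring.devs:
            if dev and dev['port'] != want:
                print '%s ring: %s/%s on wrong port %s' % (
                    name, dev['ip'], dev['device'], dev['port'])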
vishy | someone was supposed to make a set of instructions about using flatdhcp with only one interface, can't remember who know | 21:25 |
vishy | s/know/now | 21:25 |
Ryan_Lane | I'd love to be able to have more than one interface ;) | 21:26 |
Ryan_Lane | that's likely coming in cactus, right? | 21:26 |
nelson__ | http://paste.openstack.org/show/447/ | 21:27 |
Ryan_Lane | General error mounting filesystems. | 21:27 |
Ryan_Lane | A maintenance shell will now be started. | 21:27 |
Ryan_Lane | :D | 21:27 |
creiht | nelson__: yeah your container ring isn't right | 21:27 |
nelson__ | cool! | 21:27 |
creiht | I would also check object.builder just to be sure | 21:27 |
nelson__ | bad is good! | 21:27 |
creiht | indeed | 21:27 |
nelson__ | :) | 21:27 |
creiht | I was tossing and turning last night, racking my brain trying to figure out what was causing the issue | 21:28 |
creiht | nelson__: so you have everything going to one hard drive on each machine? | 21:28 |
nelson__ | yes, 5 machines. | 21:29 |
creiht | k | 21:29 |
nelson__ | dedicated filesystem. XFS | 21:29 |
Ryan_Lane | vishy: is there some way around natting the floating IPs? | 21:29 |
*** leted has joined #openstack | 21:30 | |
colinnich | creiht: I'm getting slick at setting up swift machines now - on my 3rd storage server of the session and finished from a clean install of ubuntu on a 4 drive machine in about 8 minutes :-) | 21:32 |
vishy | Ryan_Lane: not really, it would require a lot of code changes | 21:32 |
* Ryan_Lane nods | 21:33 | |
Ryan_Lane | so is the natting something that nova takes care of, or something I need to do manually? | 21:33 |
*** westmaas has joined #openstack | 21:33 | |
vishy | Ryan_Lane: it does the natting, but the outgoing route will be all messed up | 21:33 |
vishy | Ryan_Lane: so outgoing packets will look like they are coming from the wrong ip which i suspect will cause all sorts of strange issues. | 21:34 |
creiht | con:) | 21:34 |
creiht | erm | 21:34 |
creiht | colinnich: :) | 21:34 |
Ryan_Lane | vishy: how do I avoid this issue? | 21:35 |
vishy | use flatdhcp and let nova create the rules for you? | 21:35 |
vishy | :) | 21:35 |
Ryan_Lane | I think that'll be more than doable :) | 21:35 |
Ryan_Lane | nova.network.manager.FlatDhcpManager? | 21:36 |
vishy | aye, the tricky thing is setting up multiple hosts with that guy | 21:36 |
Ryan_Lane | heh | 21:36 |
vishy | if you have 2 eth devices it is easier | 21:37 |
nelson__ | creiht: okay, so I'm going to go through everything checking the port numbers. probably swapped something during the config. Thanks for your help. I'll let you know when I've got it figured out. | 21:37 |
Ryan_Lane | there's always a catch! | 21:37 |
Ryan_Lane | vishy: I have 4, in fact :D | 21:37 |
creiht | nelson__: sounds good | 21:37 |
vishy | oh in that case it shouldn't be too bad | 21:37 |
vishy | just make sure that when you set --flat_interface in your flagfile that it is an interface that doesn't already have an ip on it | 21:38 |
vishy | it can be a vlan or whatever | 21:38 |
Ryan_Lane | is there documentation for this mode? | 21:39 |
Ryan_Lane | no examples on the doc site :( | 21:39 |
vishy | Ryan_Lane: the tricky part is if you only have one interface on the host, and you can't make vlans. You have to add the host's public ip to the bridge and bridge it into the interface and use that bridge for the private ip as well | 21:40 |
vishy | Ryan_Lane: examples are slim, the guy who I was working with on the one interface version was supposed to write something | 21:41 |
vishy | Ryan_Lane: i think it was patri0t??? | 21:41 |
Ryan_Lane | well, I have more than one interface... | 21:41 |
vishy | Ryan_Lane: there are some nice docstrings in nova/network/manager.py | 21:41 |
Ryan_Lane | is there a multiple interface version of the doc? | 21:41 |
Ryan_Lane | ok | 21:41 |
vishy | Ryan_Lane: it is mostly just setting a few flags | 21:42 |
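The handful of flags in question looks roughly like the flagfile fragment below. The flag names are the ones discussed in this conversation, the values are illustrative, and the network_manager class path is as typed above (check the exact casing against nova/network/manager.py):

    # nova flagfile fragment; values are examples only
    --network_manager=nova.network.manager.FlatDhcpManager
    # an interface with no ip of its own (a vlan interface such as vlan103 works)
    --flat_interface=vlan103
    # on the compute hosts, point at the machine running the network service
    --network_host=network1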
Ryan_Lane | oh. ok | 21:42 |
Ryan_Lane | shouldn't be too bad then | 21:42 |
Ryan_Lane | vishy: thanks :) | 21:42 |
*** maplebed has quit IRC | 21:53 | |
*** maplebed has joined #openstack | 21:53 | |
*** lvaughn_ has joined #openstack | 21:54 | |
*** soliloquery has quit IRC | 21:55 | |
*** fabiand_ has quit IRC | 21:56 | |
*** lvaughn has quit IRC | 21:56 | |
*** ccustine has quit IRC | 21:57 | |
*** westmaas has quit IRC | 21:58 | |
*** arcane has joined #openstack | 22:00 | |
*** jdarcy has quit IRC | 22:02 | |
nelson__ | creiht: oops. swift-ring-builder gets unhappy when you try to remove everything. | 22:04 |
nelson__ | I may just discard and rebuild from scratch, to get going. but I'll fix the bug later. | 22:05 |
creiht | nelson__: yeah it would probably be better to rebuild from scratch | 22:05 |
colinnich | creiht: swauth.... how do you authenticate against it? ie what url for st? | 22:05 |
creiht | gholt: -^ | 22:06 |
colinnich | creiht: :-) | 22:06 |
creiht | colinnich: I'm reviewing the merge prop right now, but haven't gotten that far yet :) | 22:06 |
colinnich | creiht: I tried to read the built docs, but they are on a remote server and I'm reading an html file using more. Perhaps not the best plan :-) | 22:07 |
*** cynb has left #openstack | 22:08 | |
creiht | colinnich: in your source tree, it will probably be easier to read doc/source/overview_auth.rst | 22:08 |
colinnich | creiht: looks to have created users ok, can see them on the storage nodes and swauth-list works | 22:09 |
colinnich | creiht: will do | 22:09 |
vishy | eday: want to take a quick look, I don't want to approve until your Needs Information has been changed. | 22:10 |
rlucio | anyone know how often the new builds are pushed into the trunk ppa? it's still showing build 527, and we are on 529 now | 22:10 |
vishy | eday: https://code.launchpad.net/~soren/nova/iptables-security-groups/+merge/43767 | 22:10 |
vishy | rlucio: ppa is broken, trying to find someone who can fix it | 22:11 |
rlucio | vishy: ok thx for the info | 22:11 |
nelson__ | oh fail, rebalance REALLY doesn't like rebalancing down to one storage server. | 22:13 |
creiht | lol | 22:13 |
creiht | um yeah... don't do that :) | 22:13 |
nelson__ | yeah, probably not a real use case. | 22:13 |
nelson__ | in my copious spare time I'll fix it. probably when I add test cases to the docstrings. | 22:14 |
soren | vishy: PPA is broken how? | 22:17 |
creiht | colinnich: My first guess is that you point your auth to a proxy:8080 | 22:17 |
gholt | colinnich: For an saio, I use st -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat | 22:17 |
soren | vishy: eday's concerns have already been adressed on that mp. | 22:17 |
gholt | But the main part is that you put auth in front of whatever the path was. | 22:18 |
creiht | ahh | 22:18 |
vishy | soren: yes the nova-manage patch is failing | 22:18 |
creiht | colinnich: ignore what I said :) | 22:18 |
vishy | soren: http://pastie.org/1438394 | 22:18 |
creiht | 8080 is the saio port for the proxy | 22:18 |
vishy | soren: yeah i figured, you also still have the branch marked wip | 22:18 |
soren | vishy: Good point. | 22:19 |
soren | vishy: Set back to "Needs review". | 22:19 |
gholt | colinnich, creiht: I went ahead and added the example st line to the description of the merge proposal. :) | 22:20 |
vishy | soren: so you think i can go ahead and approve it? | 22:20 |
creiht | gholt: cool, thanks | 22:20 |
soren | vishy: If you want to wait for eday's approval, that's fine with me. | 22:20 |
gholt | But letterj has found a couple other issues I need to work through. One is internal/external urls and the other is purging tokens. | 22:20 |
creiht | k | 22:21 |
soren | vishy: Ah. /me looks at the ppa thing | 22:22 |
soren | vishy, rlucio: PPA build fixed. New packages on their way. | 22:27 |
vishy | soren: wh00t | 22:27 |
rlucio | soren: thanks for the heads up | 22:27 |
soren | ARGH! | 22:27 |
soren | There's 2709 jobs in the PPA queue. | 22:27 |
vishy | soren: you noticed that one of your merges failed with no test output? | 22:28 |
soren | estimated 14 hours. | 22:28 |
soren | vishy: Nope. | 22:28 |
*** spectorclan has quit IRC | 22:28 | |
vishy | soren: the version one | 22:28 |
soren | Weird. | 22:28 |
* soren investigates | 22:28 | |
uvirtbot | New bug: #700140 in nova "DescribeRegions is not working" [Undecided,New] https://launchpad.net/bugs/700140 | 22:31 |
soren | uvirtbot: I have a patch for that! | 22:32 |
uvirtbot | soren: Error: "I" is not a valid command. | 22:32 |
jk0 | soren: you snooze, you lose | 22:32 |
jk0 | :P | 22:32 |
vishy | soren: i thought the trunk ppa was built and uploaded by hudson? | 22:34 |
soren | vishy: It is? | 22:34 |
soren | vishy: Oh, I see what you mean. | 22:34 |
soren | vishy: Hudson builds source packages. | 22:34 |
soren | vishy: Uploads them to Launchpad which then builds the packages. | 22:34 |
vishy | soren: i see | 22:34 |
vishy | soren: is your patch the same as the one in the bug? | 22:35 |
soren | vishy: Maybe. Haven't looked at the bug. | 22:36 |
*** aliguori has quit IRC | 22:36 | |
soren | vishy: No. | 22:36 |
soren | vishy: Mine is wrong. His isn't. :) | 22:36 |
*** glenc_ has joined #openstack | 22:37 | |
vishy | soren: roflcopter | 22:37 |
Ryan_Lane | vishy: would you happen to see anything blatantly wrong with this config: http://pastebin.com/fzzjESyY | 22:39 |
Ryan_Lane | ? | 22:39 |
*** glenc has quit IRC | 22:39 | |
vishy | you don't need vlan start | 22:40 |
vishy | flat_interface needs to be an interface | 22:40 |
Ryan_Lane | oh? | 22:40 |
Ryan_Lane | ah. can't be a bridge? | 22:41 |
vishy | not a bridge | 22:41 |
vishy | correct | 22:41 |
Ryan_Lane | on the compute nodes as well? | 22:41 |
vishy | it creates br100 and bridges into an interface | 22:41 |
vishy | correct | 22:41 |
Ryan_Lane | ok | 22:41 |
Ryan_Lane | and if I wanted it to use vlan 103? | 22:41 |
vishy | flat_interface=vlan103 | 22:41 |
vishy | should be fine | 22:42 |
colinnich | gholt, creiht: that's it now, thanks | 22:42 |
Ryan_Lane | ah. so I can make a vlan interface, and tell it to use that | 22:42 |
Ryan_Lane | thanks | 22:42 |
vishy | yes | 22:42 |
vishy | and you need to specify the same on the compute hosts | 22:43 |
Ryan_Lane | awesome :) | 22:43 |
*** ppetraki has quit IRC | 22:44 | |
*** enigma has left #openstack | 22:44 | |
*** gasbakid has quit IRC | 22:46 | |
*** adiantum has joined #openstack | 22:47 | |
vishy | soren: we want to set up our own ppa. Is there a good set of instructions somewhere? | 22:48 |
*** troytoman has quit IRC | 22:54 | |
*** brd_from_italy has quit IRC | 22:59 | |
Ryan_Lane | vishy: for this do I need to have the network service installed on all compute nodes too? | 23:01 |
vishy | nope | 23:01 |
vishy | just one network host needed | 23:01 |
Ryan_Lane | hmm | 23:01 |
jeremyb | will this have much HA? | 23:02 |
jeremyb | tesla's 1 box now? | 23:02 |
Ryan_Lane | vishy: any reason instances may be stuck in "networking" state? | 23:03 |
Ryan_Lane | jeremyb: well, it won't be very HA without migration and other HA features either ;) | 23:03 |
jeremyb | Ryan_Lane: i meant the controller itself | 23:03 |
vishy | do you get any errors in the log | 23:03 |
Ryan_Lane | nope | 23:03 |
vishy | sounds like the call to network to get the ip is failing | 23:03 |
jeremyb | and if there's a "networking" service that too | 23:04 |
vishy | oh | 23:04 |
vishy | you need to set --network_host flag on compute hosts | 23:04 |
Ryan_Lane | ahhhhhh | 23:04 |
Ryan_Lane | ok | 23:04 |
vishy | so they know which one to use | 23:04 |
*** trin_cz has quit IRC | 23:05 | |
Ryan_Lane | jeremyb: dunno. that's a good question | 23:06 |
Ryan_Lane | vishy: with this configuration, I'm pretty dependent on the network node, right? is there any way to have redundancy on it? | 23:06 |
jeremyb | does it tunnel everything through there or it just hands out IPs? | 23:06 |
Ryan_Lane | jeremyb: tunnel | 23:06 |
Ryan_Lane | NAT | 23:06 |
jeremyb | huh | 23:06 |
jeremyb | why not ospf or some such? | 23:07 |
Ryan_Lane | all vms are on a private subnet by default | 23:07 |
vishy | Ryan_Lane: that is something that we are looking at. We're thinking hot spare might be the best way | 23:07 |
Ryan_Lane | yeah. likely | 23:07 |
Ryan_Lane | jeremyb: when I discussed this with mark, his first idea was "well, we'll just use NAT" :D | 23:07 |
soren | vishy: I don't know any off the top of my head. I could google for them for you, but you're a big boy. You can work it out :D | 23:08 |
colinnich | gholt: cosmetic copy and paste error in your overview-auth doc - "sat": "http://ord.storage.com:8080/v1/AUTH_8980f74b1cda41e483cbe0a925f448a9" for storage and servers | 23:08 |
* soren heads bedwards | 23:09 | |
vishy | soren: anthony tried one and it failed miserably, hence the q | 23:10 |
jeremyb | Ryan_Lane: my network foo needs some work and i think there may also be some confusing black magic somewhere | 23:12 |
uvirtbot | New bug: #700151 in nova "nova volume checks for /dev/vg which doesn't always exist" [Undecided,New] https://launchpad.net/bugs/700151 | 23:12 |
* jeremyb heads to datacenter | 23:12 | |
Ryan_Lane | heh | 23:12 |
sleepsonthefloor | I notice now that our instance_ids are short, like "i-1" - is that intentional? | 23:14 |
*** dirakx has quit IRC | 23:14 | |
*** jdurgin has quit IRC | 23:15 | |
*** aliguori has joined #openstack | 23:17 | |
*** hggdh has quit IRC | 23:18 | |
*** aliguori has joined #openstack | 23:20 | |
eday | sleepsonthefloor: yes, we removed the random secondary id | 23:22 |
openstackhudson | Project nova build #370: FAILURE in 10 sec: http://hudson.openstack.org/job/nova/370/ | 23:23 |
sleepsonthefloor | eday: ok cool, looked a little funny so I wanted to ask. Definitely easier to type though :) | 23:23 |
Ryan_Lane | is there supposed to be a dhcp process running on the network server? | 23:26 |
Ryan_Lane | cause I'm seeing arps on the flat dhcp network, but nothing is answering it | 23:27 |
Ryan_Lane | missing dnsmasq :( | 23:30 |
colinnich | gholt: swift-bench doesn't seem to work. It is looking for NamedLogger in utils.py which isn't in the Bexar code. From what I can make out, the version in your branch is out of date - the trunk swift-bench uses NamedFormatter | 23:33 |
colinnich | gholt: probably not a real problem, but thought I'd let you know | 23:33 |
colinnich | gholt: (your swauth2 branch) | 23:33 |
colinnich | gholt: tried copying the trunk swift-bench in in the hope that it worked, but of course it didn't - failed with a formatting error | 23:37 |
* colinnich is off to bed | 23:41 | |
jeremyb | good night | 23:41 |
colinnich | jeremyb: :-) | 23:41 |
*** hggdh has joined #openstack | 23:43 | |
*** reldan has joined #openstack | 23:43 | |
*** gondoi has quit IRC | 23:46 | |
*** dubsquared has quit IRC | 23:51 | |
*** hggdh has quit IRC | 23:51 | |
*** hggdh has joined #openstack | 23:52 | |
uvirtbot | New bug: #700162 in glance "Can't upload image more than 2GB using glance.client" [Undecided,New] https://launchpad.net/bugs/700162 | 23:56 |
*** dwight_ has quit IRC | 23:57 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!