*** ppetraki has joined #openstack | 00:00 | |
*** metcalfc has quit IRC | 00:04 | |
*** metcalfc has joined #openstack | 00:04 | |
*** gustavomzw has quit IRC | 00:10 | |
*** scottie has joined #openstack | 00:17 | |
*** lmcdowell has quit IRC | 00:22 | |
*** jeromatron has joined #openstack | 00:33 | |
jeromatron | anyone know if the Compute project is planning on providing the capability to have multiple IO devices? | 00:36 |
*** tobym has joined #openstack | 00:36 | |
*** miclorb has quit IRC | 00:37 | |
*** localhost has quit IRC | 00:45 | |
*** localhost has joined #openstack | 00:47 | |
*** tobym has quit IRC | 00:47 | |
*** localhost has quit IRC | 00:48 | |
*** localhost has joined #openstack | 00:54 | |
*** burris has quit IRC | 00:57 | |
*** miclorb_ has joined #openstack | 00:59 | |
*** joearnol_ has quit IRC | 01:06 | |
*** ppetraki has quit IRC | 01:07 | |
*** mtaylor has quit IRC | 01:16 | |
*** tobym has joined #openstack | 01:21 | |
*** maplebed has quit IRC | 01:25 | |
*** ArdRigh has joined #openstack | 01:28 | |
*** burris has joined #openstack | 01:30 | |
*** jeromatron has quit IRC | 01:33 | |
*** mtaylor has joined #openstack | 01:43 | |
*** ChanServ sets mode: +v mtaylor | 01:43 | |
*** sirp has quit IRC | 01:46 | |
*** sirp1 has joined #openstack | 01:46 | |
*** jeromatron has joined #openstack | 01:48 | |
*** ar1 has joined #openstack | 01:58 | |
*** scottie has quit IRC | 02:25 | |
*** lmcdowell has joined #openstack | 02:31 | |
*** miclorb_ has quit IRC | 02:33 | |
*** lmcdowell has quit IRC | 02:43 | |
*** mtaylor has quit IRC | 02:47 | |
*** sophiap has quit IRC | 02:47 | |
*** maple_be1 has quit IRC | 02:55 | |
*** metoikos has quit IRC | 02:56 | |
*** metoikos has joined #openstack | 02:57 | |
*** sophiap has joined #openstack | 02:58 | |
*** miclorb has joined #openstack | 03:11 | |
*** metoikos has quit IRC | 03:18 | |
*** metoikos has joined #openstack | 03:19 | |
*** xtoddx has joined #openstack | 03:27 | |
*** sophiap has quit IRC | 03:28 | |
*** kevnfx has joined #openstack | 03:43 | |
uvirtbot | New bug: #644075 in swift "account-reaper does not have an except/log global catchall" [Undecided,New] https://launchpad.net/bugs/644075 | 03:46 |
*** kashyapc has joined #openstack | 03:50 | |
*** jeromatron has quit IRC | 03:52 | |
*** jeromatron has joined #openstack | 03:53 | |
*** cyonyx has joined #openstack | 04:01 | |
*** stewart has quit IRC | 04:06 | |
*** stewart has joined #openstack | 04:19 | |
*** cyonyx has quit IRC | 04:20 | |
*** omidhdl has joined #openstack | 04:22 | |
*** kashyapc has quit IRC | 04:26 | |
*** tobym has quit IRC | 04:40 | |
*** sureshgv has quit IRC | 04:45 | |
*** sureshgv has joined #openstack | 04:45 | |
*** kashyapc has joined #openstack | 04:47 | |
*** ArdRigh has quit IRC | 04:50 | |
*** f4m8_ is now known as f4m8 | 04:54 | |
*** gaveen has joined #openstack | 04:55 | |
*** joearnold has joined #openstack | 04:55 | |
*** gaveen has joined #openstack | 04:56 | |
*** omidhdl1 has joined #openstack | 05:01 | |
uvirtbot | New bug: #644092 in nova "authorization not checked in ec2 api" [High,New] https://launchpad.net/bugs/644092 | 05:01 |
*** omidhdl has quit IRC | 05:01 | |
*** kevnfx has quit IRC | 05:04 | |
*** abecc has quit IRC | 05:14 | |
*** DubLo7 has quit IRC | 05:20 | |
*** DubLo7 has joined #openstack | 05:22 | |
*** omidhdl1 has left #openstack | 05:43 | |
*** miclorb has quit IRC | 05:43 | |
*** ibarrera has joined #openstack | 05:45 | |
*** joearnold has quit IRC | 05:46 | |
*** Kami__ has joined #openstack | 06:00 | |
*** miclorb has joined #openstack | 06:11 | |
*** sirp1 has quit IRC | 06:25 | |
*** dele_ted has joined #openstack | 06:36 | |
*** chmouel_ has joined #openstack | 06:41 | |
*** chmouel has quit IRC | 06:42 | |
*** miclorb has quit IRC | 06:48 | |
*** allsystemsarego has joined #openstack | 06:48 | |
*** allsystemsarego has joined #openstack | 06:49 | |
*** brd_from_italy has joined #openstack | 07:00 | |
*** dele_ted has quit IRC | 07:12 | |
*** calavera has joined #openstack | 07:16 | |
*** Kami__ has quit IRC | 07:25 | |
*** Kami_ has joined #openstack | 07:26 | |
*** dele_ted has joined #openstack | 07:34 | |
*** zheng_li has joined #openstack | 07:40 | |
*** dele_ted has quit IRC | 08:05 | |
*** jtimberman has quit IRC | 08:34 | |
*** jtimberman has joined #openstack | 08:35 | |
*** mtaylor has joined #openstack | 09:12 | |
*** ChanServ sets mode: +v mtaylor | 09:12 | |
*** cw has quit IRC | 09:31 | |
*** cw has joined #openstack | 09:31 | |
*** chmouel_ is now known as chmouel | 09:36 | |
*** DubLo7 has quit IRC | 09:37 | |
*** gaveen has quit IRC | 10:17 | |
*** gaveen has joined #openstack | 10:30 | |
*** gaveen has quit IRC | 10:31 | |
*** gustavomzw has joined #openstack | 10:36 | |
*** sophiap has joined #openstack | 10:51 | |
*** cloudmeat1 has joined #openstack | 10:58 | |
*** cloudmeat has quit IRC | 11:01 | |
*** scottie has joined #openstack | 11:04 | |
*** ar1 has quit IRC | 11:08 | |
*** kashyapc has quit IRC | 11:20 | |
*** kashyapc has joined #openstack | 11:21 | |
*** pietro has quit IRC | 11:21 | |
*** ambo has quit IRC | 11:27 | |
*** ambo has joined #openstack | 11:27 | |
*** gustavomzw has quit IRC | 11:29 | |
*** ctennis has quit IRC | 11:34 | |
*** krzycoder has quit IRC | 11:38 | |
*** tobym has joined #openstack | 11:52 | |
*** pietro has joined #openstack | 12:04 | |
*** localhost has quit IRC | 12:09 | |
*** localhost has joined #openstack | 12:26 | |
*** sophiap has quit IRC | 12:28 | |
*** sophiap has joined #openstack | 12:29 | |
*** sophiap has quit IRC | 12:31 | |
*** ctennis has joined #openstack | 12:41 | |
*** gholt has quit IRC | 12:42 | |
*** letterj has quit IRC | 12:42 | |
*** clayg has quit IRC | 12:42 | |
*** dgoetz has quit IRC | 12:42 | |
*** sophiap has joined #openstack | 12:42 | |
*** tobym has quit IRC | 12:45 | |
*** hazmat has joined #openstack | 12:46 | |
*** klord has joined #openstack | 12:46 | |
*** piken has joined #openstack | 12:50 | |
*** kashyapc has quit IRC | 12:52 | |
*** hazmat has quit IRC | 12:55 | |
*** tobym has joined #openstack | 13:09 | |
*** blamar has quit IRC | 13:11 | |
*** ppetraki has joined #openstack | 13:14 | |
*** gondoi has joined #openstack | 13:16 | |
*** clayg has joined #openstack | 13:17 | |
*** gholt has joined #openstack | 13:19 | |
*** Podilarius has joined #openstack | 13:20 | |
*** krish has joined #openstack | 13:20 | |
*** dendrobates is now known as dendro-afk | 13:30 | |
*** gustavomzw has joined #openstack | 13:37 | |
chmouel | the /etc/sudoers.d/nova_sudoers from nova-common makes sudo coredump with a backtrace pretty heavily! http://paste.openstack.org/show/28/ | 13:41 |
*** dendro-afk is now known as dendrobates | 13:45 | |
soren | chmouel: orly? Weirdness. | 13:53 |
soren | chmouel: Erk. | 13:53 |
chmouel | yeah, it can't parse the file, no idea why | 13:54 |
*** abecc has joined #openstack | 13:54 | |
chmouel | got to reboot into recovery to get root again on that box | 13:55 |
*** matclayton has joined #openstack | 13:55 | |
matclayton | just wondering where do you set the replication factor in swift? and can it be changed on a per file basis? | 13:56 |
notmyname | matclayton: it's constant right now (global to the cluster) | 13:57 |
creiht | matclayton: Currently it is set when creating the ring, though there are parts that are still a bit hard-coded to expect 3 replicas | 13:57 |
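The build-time detail creiht mentions can be illustrated with a toy sketch (hypothetical code, not Swift's actual ring implementation): the replica count is a parameter baked in when the ring is built, so it applies cluster-wide rather than per object.

```python
import hashlib

def build_ring(devices, replicas=3):
    """Toy stand-in for Swift's ring: the replica count is fixed
    when the ring is built, not chosen per object."""
    def get_nodes(obj_name):
        # Hash the object name to a starting device, then take
        # `replicas` consecutive (distinct) devices from the list.
        h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
        start = h % len(devices)
        return [devices[(start + i) % len(devices)] for i in range(replicas)]
    return get_nodes

get_nodes = build_ring(["dev1", "dev2", "dev3", "dev4"])
nodes = get_nodes("AUTH_acct/container/object")
print(len(nodes))  # → 3
```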
creiht | matclayton: do you want more or fewer replicas? | 13:58 |
matclayton | creiht: ok thanks, our use case is that some files become "hot" and therefore we need to put them on more than one machine to serve the load (or three even) | 13:58 |
creiht | ahh | 13:58 |
matclayton | but only for short periods | 13:58 |
creiht | so that is a bit different | 13:58 |
creiht | the replicas are more for durability | 13:58 |
matclayton | ok | 13:59 |
creiht | adding more replicas will not necessarily mean faster serving | 13:59 |
creiht | at least the way the code is written at the moment | 13:59 |
matclayton | essentially we run a music hosting site (lots of mp3s) and looking at using swift for storage | 13:59 |
creiht | a better solution to what you are talking about is to have some sort of hot cache in front of swift | 13:59 |
matclayton | is this a good solution | 14:00 |
matclayton | yeah, we used to run a hot cache on SSDs and can easily redeploy that | 14:00 |
creiht | at RS, we typically use the CDN integration for that type of use case | 14:00 |
matclayton | so what's the reasoning behind adding more replicas not increasing throughput? | 14:00 |
matclayton | (we would increase the proxies as well) | 14:01 |
matclayton | on average the load is fairly predictable, its just some hot spot as content becomes popular which we have found to cause problems in the past | 14:01 |
creiht | well, it isn't coded in a way to try to evenly distribute read requests across all replicas | 14:01 |
matclayton | oh does it prefer a primary one? | 14:02 |
creiht | it returns whichever one it finds first | 14:02 |
matclayton | ah ok | 14:02 |
creiht | which can be a bit arbitrary | 14:03 |
creiht | not that the code couldn't be changed to try to distribute the reads more evenly | 14:03 |
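The read path creiht describes can be sketched as follows (hypothetical code, not Swift's proxy): the proxy effectively serves from whichever replica it hits first, and shuffling the candidate list is one simple way a patch could spread read load.

```python
import random

def pick_node(replica_nodes, spread_reads=False):
    """Without spreading, effectively use whichever replica comes
    first (arbitrary, as described above); shuffling is one minimal
    way to distribute reads across replicas."""
    nodes = list(replica_nodes)
    if spread_reads:
        random.shuffle(nodes)
    return nodes[0]

print(pick_node(["node-a", "node-b", "node-c"]))  # → node-a
```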
matclayton | sounds to me like that might self regulate a bit | 14:03 |
matclayton | or we can patch it, we are quite keen on swift, being a python shop | 14:03 |
creiht | all that said, it seems like it would be better to use something in front of swift to cache the hot content | 14:04 |
matclayton | indeed, we would do that anyway | 14:04 |
*** f4m8 is now known as f4m8_ | 14:04 | |
matclayton | probably just setup a couple of varnish nodes | 14:04 |
creiht | we were just talking about that :) | 14:04 |
matclayton | the issue is our content size is average 100meg | 14:04 |
matclayton | we have done varnish in front of S3 before and its not without issues | 14:05 |
creiht | ahh | 14:05 |
matclayton | so holding them in memory doesn't work too well :) | 14:05 |
notmyname | what sort of issues? it's not a setup we run, but it's common enough that you aren't the first to mention something like that | 14:05 |
matclayton | the other question is, how performant is swift at delivering the files? what is the likely bottleneck? | 14:05 |
creiht | the bottleneck will most likely be the network | 14:06 |
notmyname | network to the storage nodes | 14:06 |
matclayton | between proxy and object? | 14:06 |
creiht | reads are quite fast | 14:06 |
creiht | yeah | 14:06 |
matclayton | is there any way to just return a reference to files on disk and use that to serve files directly off the object node through something like nginx? | 14:06 |
creiht | matclayton: how many nodes are you running, and what type of networking are you using? | 14:07 |
matclayton | we run 4 storage nodes at the moment | 14:07 |
matclayton | about 20TB | 14:07 |
matclayton | storage | 14:07 |
matclayton | we hold about 500Mbit outgoing | 14:07 |
matclayton | grows by about 30-40% monthly | 14:07 |
matclayton | currently this is just 4 machines with nginx and a dumb file system | 14:08 |
matclayton | all machines have 1Gig connection, internal and external | 14:08 |
creiht | k | 14:08 |
creiht | so swift should be able to saturate your 500Mb pretty easily | 14:09 |
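A back-of-the-envelope check of the numbers in this exchange (illustrative arithmetic only): with 1 Gbit links on each storage node, a single 100 MB file transfer is link-limited, and four nodes comfortably cover 500 Mbit of egress.

```python
# Time to move one average file across a single storage-node link.
file_megabytes = 100            # avg file size discussed above
link_megabits_per_s = 1000      # 1 Gbit/s per node
seconds = file_megabytes * 8 / link_megabits_per_s
print(seconds)  # → 0.8

# Aggregate storage-node bandwidth vs. current outbound load.
nodes = 4
aggregate_megabits = nodes * link_megabits_per_s
print(aggregate_megabits >= 500)  # → True
```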
creiht | are you running proxies on the storage nodes as well? | 14:09 |
matclayton | thats our current load, we are looking at switching to some other setup to be defined | 14:09 |
matclayton | we can run proxies either on the same nodes or separate | 14:09 |
creiht | k | 14:10 |
creiht | avg file size is 100MB, or is that the upper end? | 14:10 |
matclayton | basically because increasing capacity is a real pain | 14:10 |
matclayton | currently average is 60Meg, however that is likely to increase | 14:10 |
creiht | k | 14:10 |
matclayton | we have a 100M upload limit, which, as soon as we can, we would love to remove | 14:10 |
*** xfaf has joined #openstack | 14:11 | |
*** rnirmal has joined #openstack | 14:11 | |
creiht | so when an object becomes hot, what type of throughput are we talking about? | 14:12 |
matclayton | so the other thought we had bounced around, was can we setup nginx on the object nodes, and give it access to the filesystem beneath swift, and get swift just to return the node_address and filename, preventing the internal traffic | 14:12 |
matclayton | creiht: dont know the stats at the moment for that | 14:13 |
matclayton | but we can be looking at few thousand people listening to the same file | 14:13 |
creiht | matclayton: we played with using sendfile once, but if I remember correctly it didn't make that much of a difference | 14:13 |
matclayton | for serving direct from the object node to the final clients, avoiding the proxy? | 14:14 |
creiht | I'm trying to recall... it has been a while :) | 14:14 |
matclayton | know that feeling, | 14:14 |
*** cloudmeat has joined #openstack | 14:15 | |
*** cloudmeat1 has quit IRC | 14:16 | |
*** xfaf has quit IRC | 14:16 | |
creiht | is there any way you can simulate a similar load? | 14:16 |
matclayton | we could try | 14:17 |
matclayton | not going to be easy, not really setup for it at the moment | 14:17 |
creiht | it would be interesting to test it first to see where the weak points are | 14:17 |
notmyname | the sendfile test was to see if avoiding user space (memory) would be faster. turned out that the network was a much more limiting factor than kernel/user memory space | 14:17 |
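The experiment notmyname describes relies on sendfile's kernel-space copy. A minimal standalone sketch (not the actual Swift patch) of file-to-socket transfer that bypasses user-space buffers:

```python
import os
import socket
import tempfile

# sendfile(2): the kernel copies file pages straight to the socket,
# skipping the usual read()-into-user-space / write()-back round trip.
with tempfile.NamedTemporaryFile() as f:
    f.write(b"x" * 65536)
    f.flush()
    out_sock, in_sock = socket.socketpair()
    sent = os.sendfile(out_sock.fileno(), f.fileno(), 0, 65536)
    in_sock.settimeout(5)
    received = in_sock.recv(sent)  # may be a partial read; fine for a demo
    print(sent > 0)  # → True
```

As the conversation notes, the saving only matters when kernel/user copies are the bottleneck; in their tests the network dominated instead.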
*** kevnfx has joined #openstack | 14:18 | |
matclayton | you are talking about proxy->obj store network? | 14:18 |
notmyname | we've mentioned exposing every storage node to public traffic, but I don't know that many (any) tests have been done on that | 14:18 |
notmyname | yes | 14:18 |
matclayton | so I was wondering if you can just remove that step | 14:18 |
notmyname | in our setup, proxy -> obj network is smaller than public -> proxy net, so that becomes the limiting factor | 14:19 |
creiht | matclayton: it is just code right? :) | 14:19 |
matclayton | exactly :) | 14:19 |
creiht | I can't think of anything that would prevent you from trying it | 14:19 |
matclayton | so is most of your traffic internal then? | 14:19 |
creiht | We have a good amount of both external and internal traffic | 14:20 |
creiht | not sure which is more actually :) | 14:20 |
matclayton | ah ok | 14:20 |
creiht | basically we have several trunked 10G connections coming in from the outside | 14:20 |
creiht | each proxy has 10G to the outside | 14:20 |
creiht | each storage node has 1G to everything | 14:21 |
matclayton | ok | 14:21 |
*** dendrobates is now known as dendro-afk | 14:23 | |
matclayton | cool, will give it a go as is and see how we get on; might try the serve-direct approach. out of interest, was there any specific reason it wasn't done that way originally? | 14:23 |
matclayton | suppose you need the client to be aware of the system | 14:23 |
creiht | we talked about it | 14:23 |
creiht | one of the main reasons that I can remember is that it is easier to handle failure scenarios if everything is going through the proxy | 14:24 |
creiht | And it was more than "good enough" :) | 14:24 |
matclayton | yeah makes sense, | 14:24 |
creiht | we also like that the actual storage nodes are on a private network, not accessible directly by the public networks | 14:24 |
matclayton | oh, I think this is the last couple of questions. Any reason for xfs over ext4? or something we should watch out for on ext4? | 14:24 |
*** kevnfx has quit IRC | 14:25 | |
creiht | for testing in our use case, xfs performance degraded much less over time than ext4 | 14:25 |
creiht | your use case is a bit different than ours, so I can't say whether or not there is a difference | 14:25 |
creiht | everything should work fine on ext4 | 14:26 |
*** dendro-afk is now known as dendrobates | 14:26 | |
matclayton | great thanks | 14:26 |
notmyname | just make sure that xattrs are enabled | 14:26 |
creiht | right | 14:26 |
matclayton | ah, likely to miss that, thanks | 14:26 |
creiht | you would find out pretty quick :) | 14:26 |
notmyname | you wouldn't miss it for long :-) | 14:26 |
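The xattr requirement above can be checked with a quick standalone probe (illustrative only; the `user.swift.metadata` attribute name mirrors the key Swift uses for object metadata, and the value here is a placeholder):

```python
import os

# Swift keeps object metadata in user.* extended attributes, so the
# object servers' filesystem must be mounted with xattr support.
path = "xattr_demo.tmp"
with open(path, "wb") as f:
    f.write(b"object data")
try:
    if hasattr(os, "setxattr"):  # os.setxattr is Linux-only
        os.setxattr(path, "user.swift.metadata", b"pickled-metadata-here")
        value = os.getxattr(path, "user.swift.metadata")
    else:
        value = None
except OSError:  # filesystem mounted without user xattr support
    value = None
finally:
    os.remove(path)
print(value)
```

If the roundtrip fails, object writes through Swift would fail the same way, which is why "you would find out pretty quick."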
*** chmouelb has joined #openstack | 14:28 | |
*** lmcdowell has joined #openstack | 14:34 | |
*** cloudmeat has quit IRC | 14:34 | |
*** krish has quit IRC | 14:36 | |
*** annegentle has joined #openstack | 14:37 | |
soren | chmouel: Apparently, sudo gets very, very upset with you if your sudoers file (or an included sudoers file) isn't mode 440. | 14:46 |
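The fix soren describes amounts to making sure sudoers drop-ins are mode 0440. A sketch against a throwaway temp file rather than the real /etc/sudoers.d/nova_sudoers (the rule text is hypothetical):

```python
import os
import stat
import tempfile

# sudo requires files under /etc/sudoers.d to be mode 0440
# (read-only for owner and group, no access for others).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("nova ALL=(ALL) NOPASSWD: ALL\n")  # hypothetical rule
os.chmod(path, 0o440)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # → 0o440
os.remove(path)   # clean up the demo file
```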
*** gundlach has joined #openstack | 14:52 | |
*** cloudmeat has joined #openstack | 14:52 | |
*** dendrobates is now known as dendro-afk | 14:53 | |
*** pharkmillups has joined #openstack | 14:55 | |
*** kashyapc has joined #openstack | 14:55 | |
uvirtbot | New bug: #644420 in swift "Typo in Getting Started in Swift doc page "developemnt"" [Undecided,New] https://launchpad.net/bugs/644420 | 14:56 |
*** dendro-afk is now known as dendrobates | 14:56 | |
jaypipes | soren: yep, that error forced me to boot into recovery mode, into a root shell, simply to change the perms, as the nova_sudoers file caused sudo to segfault. Very annoying. | 14:58 |
*** sirp1 has joined #openstack | 15:01 | |
*** kevnfx has joined #openstack | 15:03 | |
*** matclayton has quit IRC | 15:04 | |
*** Podilarius has quit IRC | 15:07 | |
*** ctennis has quit IRC | 15:11 | |
jaypipes | dendrobates: pls set priority: https://blueprints.launchpad.net/nova/+spec/austin-nosql-datastore-adapter | 15:11 |
dendrobates | k | 15:12 |
dendrobates | done. | 15:12 |
*** kevnfx has quit IRC | 15:13 | |
chmouel | soren: yeah, prob the permissions in the package should get fixed; have you seen that problem before? | 15:14 |
*** Podilarius has joined #openstack | 15:14 | |
*** jeromatron has left #openstack | 15:15 | |
jaypipes | dendrobates: cheers :) | 15:18 |
*** ibarrera has quit IRC | 15:23 | |
*** tobym has quit IRC | 15:28 | |
*** gasbakid has joined #openstack | 15:30 | |
*** calavera has quit IRC | 15:36 | |
*** pvo has joined #openstack | 15:49 | |
*** ChanServ sets mode: +v pvo | 15:49 | |
gundlach | cerberus: are you aware of the text conflicts in the rs_auth mergeprop? | 15:53 |
gundlach | _cerberus_: ^^ | 15:53 |
*** kevnfx has joined #openstack | 15:53 | |
_cerberus_ | gundlach: I am not. I was en route to the office | 15:53 |
_cerberus_ | I'll check now | 15:53 |
gundlach | cool, if you look at the mergeprop page you'll see at the top the list of text conflicts | 15:54 |
_cerberus_ | gundlach: merge conflicts fixed and repushed | 16:05 |
gundlach | great, i'll take a look | 16:05 |
jaypipes | _cerberus_: FYI, if you set the merge prop to Work in Progress while fixing up merge conflicts, then set it back to Needs Review, all reviewers will be notified to review again. Just a quick tip ;) | 16:06 |
_cerberus_ | jaypipes: yeah, I did that earlier. I didn't want to start spamming ;-) | 16:07 |
jaypipes | _cerberus_: :P) | 16:07 |
jaypipes | _cerberus_: gah, my typing sux today. | 16:07 |
_cerberus_ | heh | 16:08 |
*** maplebed has joined #openstack | 16:16 | |
*** joearnold has joined #openstack | 16:32 | |
*** metoikos has quit IRC | 16:33 | |
*** lmcdowell has quit IRC | 16:43 | |
*** jbryce has joined #openstack | 16:44 | |
soren | chmouel: I've pushed a fix to the packaging branch. It should be fixed by tomorrow. | 16:45 |
*** kevnfx has quit IRC | 16:48 | |
gundlach | vishy: I'm seeing errors in the quota unittests in one of my branches. One is test_too_many_addresses, which has a note from you saying that this test isn't working properly... should this test be excluded or something? | 16:48 |
*** jbryce has quit IRC | 16:50 | |
gundlach | (I'm getting a QuotaError) | 16:50 |
*** rlucio has joined #openstack | 16:56 | |
*** matclayton has joined #openstack | 16:56 | |
*** zheng_li has quit IRC | 17:11 | |
*** DubLo7 has joined #openstack | 17:12 | |
*** chmouelb has quit IRC | 17:18 | |
*** annegentle has quit IRC | 17:21 | |
*** dendrobates is now known as dendro-afk | 17:25 | |
*** hazmat has joined #openstack | 17:25 | |
*** dendro-afk is now known as dendrobates | 17:29 | |
*** pharkmillups has quit IRC | 17:29 | |
*** joearnold has quit IRC | 17:32 | |
*** matclayton has left #openstack | 17:43 | |
*** zheng_li has joined #openstack | 17:50 | |
*** joearnold has joined #openstack | 17:51 | |
gundlach | vishy: nm, i found it. you were using assertFailure incorrectly, i think -- you were supposed to return its value. my branch is off twisted so i replaced it with assertRaises. | 17:55 |
*** gasbakid has quit IRC | 17:55 | |
*** pietro has quit IRC | 17:58 | |
*** joearnold has quit IRC | 17:58 | |
*** dendrobates is now known as dendro-afk | 17:58 | |
*** joearnold has joined #openstack | 17:59 | |
*** kashyapc has quit IRC | 18:03 | |
notmyname | what's the bzr equivalent of git stash? | 18:15 |
hazmat | notmyname, bzr shelve ? | 18:15 |
notmyname | and git stash apply == bzr shelve apply? | 18:16 |
hazmat | notmyname, bzr unshelve | 18:16 |
hazmat | notmyname, bzr shelve --list to see what's shelved.. you can apply a symbolic name when you shelve | 18:16 |
notmyname | what about git stash clear? | 18:16 |
hazmat | notmyname, i'm not a git user so.. dunno.. there isn't a way to kill a shelve content, but you can continually append additional shelves.. and unshelve specific things | 18:18 |
notmyname | hmm..ok | 18:19 |
hazmat | notmyname, actually bzr shelve --destroy can destroy it.. | 18:19 |
hazmat | bzr shelve --help has the details | 18:19 |
notmyname | I've got uncommitted changes that conflict with a branch I want to merge in (but I don't want to lose my uncommitted changes). in git, I would git stash; git merge; git stash apply | 18:19 |
*** brd_from_italy has quit IRC | 18:20 | |
hazmat | notmyname, same thing with bzr .. bzr shelve --all -m "premergestuff" && bzr merge ../trunk && bzr unshelve premergestuff | 18:20 |
notmyname | thanks | 18:21 |
hazmat | notmyname, np | 18:21 |
hazmat | notmyname, there's also a very helpful #bzr channel on freenode | 18:21 |
*** pharkmillups has joined #openstack | 18:21 | |
notmyname | but why would I use that when you're here? ;-) | 18:21 |
notmyname | good to know. I'll use that in the future | 18:22 |
*** joearnold has quit IRC | 18:25 | |
*** metoikos has joined #openstack | 18:28 | |
*** Pentheus has joined #openstack | 18:32 | |
vishy | gundlach: returning the value from assertFailure solves the issue? | 18:39 |
*** annegentle has joined #openstack | 18:40 | |
gundlach | vishy: twisted.trial's documentation mentions that you must return the deferred from your testcase, so i imagine that was the cause of the problem you mentioned in your note. my branch moves onto eventlet, so i can just use the standard unittest.TestCase.assertRaises. | 18:40 |
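The difference gundlach describes can be shown with plain unittest (QuotaError and allocate_address here are stand-ins, not nova's actual code): in synchronous, eventlet-style tests, assertRaises works directly, whereas twisted's trial only catches a failure if the deferred from assertFailure is returned from the test method.

```python
import unittest

class QuotaError(Exception):  # stand-in for nova's QuotaError
    pass

def allocate_address():
    # stand-in for the quota-limited operation under test
    raise QuotaError("Address quota exceeded")

class QuotaTestCase(unittest.TestCase):
    def test_too_many_addresses(self):
        # Synchronous style: no deferred to return, so assertRaises
        # checks the exception on the spot.
        self.assertRaises(QuotaError, allocate_address)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(QuotaTestCase))
print(result.wasSuccessful())  # → True
```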
vishy | gotcha, i'll try with returns to see if that fixes it | 18:42 |
vishy | everyone: bpython | 18:42 |
vishy | coolest thing since sliced bread | 18:42 |
*** pietro__ has joined #openstack | 18:42 | |
*** blamar has joined #openstack | 18:44 | |
*** rlucio_ has joined #openstack | 18:44 | |
*** rlucio has quit IRC | 18:46 | |
*** rlucio_ is now known as rlucio | 18:46 | |
*** skippyish has left #openstack | 18:48 | |
gundlach | wow vishy, thanks; will switch from ipython | 18:48 |
gundlach | i love the autocomplete ui | 18:48 |
*** DubLo7 has quit IRC | 18:48 | |
*** pietro__ has quit IRC | 18:50 | |
*** dendro-afk is now known as dendrobates | 18:51 | |
*** DubLo7 has joined #openstack | 18:51 | |
gundlach | soren: i'm stuck trying to merge https://code.launchpad.net/~gundlach/nova/controllers-in-api/+merge/34795 again, same error as yesterday: "deleting parent in nova/endpoint". could you take a look, or point me to someone else to ask? | 18:51 |
gundlach | i'm getting to the point where i'm spending a couple hours a day just trying to get this to merge, which is silly :) | 18:51 |
pvo | is soren still around? | 18:52 |
gundlach | dunno, he's still in the room, but idle | 18:52 |
pvo | mtaylor might be lurking ... :) | 18:52 |
gundlach | ah, thx, last i checked mtaylor wasn't in here :) mtaylor: ^^ ? | 18:52 |
mtaylor | pvo: aroo? | 18:52 |
* gundlach cheers | 18:52 | |
gundlach | mtaylor, i'm in need of some serious hudson-launchpad-fu | 18:53 |
mtaylor | gundlach: I will do my best to provide such a thing :) | 18:53 |
gundlach | thx! :) so i've got two concurrent problems: | 18:53 |
vishy | gundlach: yeah it really rocks | 18:54 |
gundlach | 1) when i try to merge lp:~gundlach/nova/controllers-in-api to lp:nova, sometimes i get the "deleting parent in nova/endpoint" error mentioned above. i don't know how to convince trunk that it's *ok* that i'm deleting the nova/endpoint/ directory. | 18:54 |
*** allsystemsarego has quit IRC | 18:54 | |
gundlach | 2) when i do manage to merge, i'm getting unittest errors that look like eventlet 0.9.12 hasn't been installed on hudson yet to replace 0.9.10. on my local venv, i don't get those errors. | 18:54 |
vishy | gundlach: i just created a branch adding support to nova-manage for bpython | 18:54 |
gundlach | vishy: :D | 18:54 |
gundlach | mtaylor: so at the moment i'm trying to merge a branch that intentionally has a unittest fail by printing out eventlet.__version__, to prove to myself that 0.9.12 is really installed :) but #1 above is blocking that experiment at the moment. | 18:55 |
mtaylor | gundlach: hrm. ok. well, I thought soren was working on getting the eventlet upgrade done... lemme check | 18:56 |
mtaylor | and the other thing is weird... but lemme look at that too :) | 18:56 |
gundlach | yep, he's reported that it completed, which is why i'm confused | 18:56 |
gundlach | as usual, probably my lack of bzr/lp knowledge causing the prob, but i'm stuck hard enough to ask for help :) | 18:57 |
mtaylor | ok. no - eventlet 0.9.10 is still installed | 18:59 |
gundlach | WHEW | 18:59 |
* gundlach steps back from the abyss | 19:00 | |
mtaylor | gundlach: ok. 0.9.12 is now installed | 19:00 |
mtaylor | gundlach: ok - trying it again. | 19:06 |
mtaylor | gundlach: oh - didn't you say you pushed a deliberate test case fail? | 19:07 |
gundlach | yeah, and i just pushed the revert of that. it looks like my Approve got in before the revert. | 19:07 |
mtaylor | heh | 19:07 |
gundlach | i'll approve again :) | 19:07 |
mtaylor | I just did | 19:08 |
gundlach | so it looks like you fixed whatever the "delete parent endpoint" error was about -- however you did it, thank you. | 19:08 |
mtaylor | any time! | 19:12 |
gundlach | IT WORKED! | 19:12 |
gundlach | mtaylor: let me buy you a beer at the summit | 19:13 |
creiht | annegentle: welcome to your first merge :) | 19:13 |
annegentle | :) | 19:14 |
* annegentle makes friends with bzr | 19:14 | |
mtaylor | gundlach: I will allow you to buy me beer :) | 19:14 |
vishy | annegentle: bzr is a fickle friend | 19:18 |
annegentle | hee | 19:19 |
*** ChanServ sets mode: +v _cerberus_ | 19:20 | |
*** mtaylor has quit IRC | 19:20 | |
*** gustavomzw has quit IRC | 19:33 | |
*** _anm has joined #openstack | 19:39 | |
*** anm_ has quit IRC | 19:41 | |
*** _anm is now known as anm_ | 19:41 | |
*** metoikos has quit IRC | 19:42 | |
*** krish has joined #openstack | 19:50 | |
*** brd_from_italy has joined #openstack | 19:50 | |
dendrobates | Release meeting in 1 hour in #openstack-meeting | 20:00 |
*** jsgotangco has joined #openstack | 20:13 | |
*** skippyish has joined #openstack | 20:16 | |
*** pvo_ has joined #openstack | 20:21 | |
*** joearnold has joined #openstack | 20:22 | |
_cerberus_ | So it appears my merge failed. https://code.launchpad.net/~cerberus/nova/rs_auth/+merge/35727 I attempted to add a new model to the db for the API auth tokens. It would seem I need to do something else to facilitate that change | 20:25 |
_cerberus_ | Someone care to point me in the right direction? | 20:25 |
*** skippyish has quit IRC | 20:27 | |
*** skippyish has joined #openstack | 20:31 | |
gundlach | _cerberus_: did these tests pass in your branch, before you tried merging? | 20:32 |
_cerberus_ | gundlach: I failed to run the full suite :-/ | 20:33 |
_cerberus_ | My own tests passed, and I forgot to check the outer suite :-P | 20:34 |
gundlach | and, i've found it useful to "bzr branch lp:nova trunk" then cd to trunk/ and "bzr merge lp:<the branch i'm trying to merge in hudson>" to check if i get the same test failures as hudson does | 20:34 |
gundlach | ah! well, there you go :) | 20:34 |
*** skippyish has quit IRC | 20:34 | |
*** skippyish has joined #openstack | 20:35 | |
*** DaFrog has joined #openstack | 20:35 | |
*** ianw has joined #openstack | 20:35 | |
*** DaFrog has left #openstack | 20:35 | |
_cerberus_ | gundlach: hmmm, I'm likely doing something wrong, but they do all fail in trunk as well | 20:35 |
gundlach | your test failures look like something is really basically wrong e.g. | 20:36 |
gundlach | [ERROR]: nova.tests.access_unittest.AccessTestCase.test_002_allow_none | 20:36 |
gundlach | Traceback (most recent call last): | 20:36 |
gundlach | File "/var/lib/hudson/src/nova/hudson/nova/test.py", line 217, in run | 20:36 |
gundlach | self.setUp() | 20:36 |
gundlach | File "/var/lib/hudson/src/nova/hudson/nova/tests/access_unittest.py", line 76, in setUp | 20:36 |
gundlach | self.context.project = self.project | 20:36 |
gundlach | exceptions.AttributeError: 'AccessTestCase' object has no attribute 'project' | 20:36 |
gundlach | doesn't self.project get set in setUp()? | 20:36 |
gundlach | maybe you somehow made setUp() not get run on the tests? | 20:36 |
_cerberus_ | *that* seems really unlikely | 20:37 |
gundlach | when i have test errors in run_tests.sh, i modify run_tests.py to remove all but one "from some_tests import *" and add an "import logging" which seems to be required and then run them again | 20:37 |
*** pvo_ has quit IRC | 20:37 | |
gundlach | to help narrow it down. try that out and see what yo uget. | 20:38 |
*** rlucio has quit IRC | 20:38 | |
*** skippyish has quit IRC | 20:40 | |
*** skippyish has joined #openstack | 20:41 | |
*** krzycoder has joined #openstack | 20:43 | |
*** scottie has quit IRC | 20:45 | |
eday | gundlach: it looks like the cloudpipe API parts are still dependent on tornado/nova.endpoint modules... did you have a branch fixing this yet? | 20:46 |
uvirtbot | New bug: #644704 in swift "Change object-updater to track what containers have been updated" [High,New] https://launchpad.net/bugs/644704 | 20:46 |
gundlach | nope, i just got unit tests to pass. i hadn't realized those were still tied in. | 20:46 |
*** scottie has joined #openstack | 20:46 | |
gundlach | i'm not familiar with cloudpipe -- is that something that's going into Austin? | 20:46 |
eday | gundlach: well, it's been in there for a while, it's the VPN functions around servers | 20:47 |
gundlach | gotcha. | 20:47 |
eday | gundlach: things like bin/nova-manage still depend on endpoint as well :) | 20:47 |
gundlach | hmm, i guess they should have had unit tests ;) | 20:47 |
gundlach | i'll take a look -- i'm working on redoing access_unittest now that rbac has gone away | 20:48 |
eday | gundlach: yeah, probably should have. either way, should probably get this fixed up since trunk is broken, I'm going to poke at it some | 20:48 |
gundlach | that would be great! | 20:48 |
eday | can remove these tornado deps as well | 20:49 |
gundlach | if you make a branch lemme know and i'll review it asap | 20:49 |
eday | vishy: I assume you guys still use/need cloudpipe? | 20:51 |
*** ohiocitrix has joined #openstack | 20:52 | |
ohiocitrix | Is there an openstack meeting coming up in a few or did I miss it? | 20:53 |
creiht | ohiocitrix: in 6 minutes | 20:54 |
creiht | in #openstack-meeting | 20:54 |
ohiocitrix | OK, will go there. appreciate it | 20:54 |
*** ohiocitrix is now known as spectorclan | 20:55 | |
*** adjohn has joined #openstack | 20:56 | |
dendrobates | Release meeting in 4 min in #openstack-meeting | 20:56 |
*** sparkycollier has joined #openstack | 21:01 | |
*** JimCurry has joined #openstack | 21:01 | |
*** lmcdowell has joined #openstack | 21:02 | |
*** ctennis has joined #openstack | 21:03 | |
*** ctennis has joined #openstack | 21:03 | |
eday | gundlach: it looks like in the port to wsgi, the ec2 URL maps dropped the cloudpipe controllers altogether: | 21:05 |
eday | (r'/cloudpipe/(.*)', nova.cloudpipe.api.CloudPipeRequestHandler), | 21:05 |
eday | (r'/cloudpipe', nova.cloudpipe.api.CloudPipeRequestHandler), | 21:05 |
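The two dropped entries above are Tornado-style regex URL maps: a pattern paired with a request-handler class. As a minimal sketch of the idea (hypothetical names; not Nova's actual WSGI router), the same two routes could be expressed in a plain regex dispatcher like this:

```python
import re

class CloudPipeController:
    """Stand-in for nova.cloudpipe.api.CloudPipeRequestHandler (hypothetical)."""
    def handle(self, instance_id):
        return "cloudpipe:%s" % (instance_id or "index")

# Regex URL map in the same spirit as the dropped Tornado routes:
# the more specific /cloudpipe/<id> pattern is listed first.
URL_MAP = [
    (re.compile(r'^/cloudpipe/(.*)$'), CloudPipeController()),
    (re.compile(r'^/cloudpipe$'), CloudPipeController()),
]

def dispatch(path):
    """Match a request path against the map; return a body, or None (a 404)."""
    for pattern, controller in URL_MAP:
        match = pattern.match(path)
        if match:
            arg = match.group(1) if match.groups() else None
            return controller.handle(arg)
    return None
```

This only illustrates the routing shape being discussed; wiring it into the real wsgi layer would go through whatever router nova.api actually uses.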
gundlach | ah yeah, i added a TODO which may have gotten lost in the 70 merge attempts which said "I'm not adding cloudpipe until i hear that we absolutely need it" | 21:06 |
*** jdarcy has quit IRC | 21:06 | |
eday | hmm, ok. probably should have confirmed before it hit trunk :) | 21:06 |
gundlach | yeah, i hadn't realized how much of the code wasn't covered by unittests. | 21:07 |
gundlach | pardon me for leaving so much stuff messy in this merge, apparently -- the merge has been a real headache :) | 21:07 |
eday | yeah.. testing def needs to be improved | 21:07 |
zul | soren: i noticed in the trunk you need a newer python-eventlet are you going to ask for a FFE? | 21:11 |
soren | zul: I expect to, yes. | 21:12 |
zul | soren: k thanks | 21:12 |
jaypipes | vishy: if time, #openstack-meeting. would be good to get status updates from anso :) | 21:15 |
zul | soren: if you need help let me know | 21:15 |
soren | zul: Will do, thanks. | 21:16 |
*** scottie has quit IRC | 21:17 | |
*** cloud0 has quit IRC | 21:20 | |
*** rlucio has joined #openstack | 21:22 | |
*** kevnfx has joined #openstack | 21:22 | |
*** scottie has joined #openstack | 21:28 | |
*** jkakar has quit IRC | 21:31 | |
*** annegentle has quit IRC | 21:32 | |
*** jkakar has joined #openstack | 21:33 | |
*** gustavomzw has joined #openstack | 21:41 | |
vishy | eday: we def. need cloudpipe but we are still using cloud.py at the moment | 21:41 |
vishy | gundlach: cloudpipe endpoint could be its own app | 21:42 |
gundlach | vishy: fine with me; i don't need to get cloudpipe off of tornado, i just needed to get the API off of tornado. if cloudpipe doesn't expose an API, we're set. | 21:43 |
gundlach | (public API) | 21:43 |
gundlach | vishy: re cloud.py -- do you mean api.ec2.cloud.py? Be aware that's off of Twisted now | 21:43 |
eday | vishy, gundlach: It looks like we'll probably want to do api/cloudpipe, and then create a generic run_instances function somewhere that both ec2/rs/cloudpipe can call | 21:43 |
gundlach | eday: remind me -- run_instances fires up some new virtual servers? | 21:44 |
eday | gundlach: yup | 21:44 |
eday | gundlach: cloudpipe currently takes a controller and uses that to start some, but we should have it using a lower-level function most likely | 21:45 |
eday | gundlach: we'll want to create one anyways for ec2/rackspace APIs to share | 21:45 |
gundlach | if i understand the arch correctly (and I don't ;), it seems to make sense that all 3 of ec2/rs/cloudpipe would put the same message onto the queue to start the process of spawning instances. wouldn't that be the common API they share rather than some class/function? | 21:45 |
*** spectorclan has quit IRC | 21:46 | |
*** scottie has quit IRC | 21:46 | |
gundlach | you know what, don't even bother answering my question -- i'm sure you understand better what's going on, as i have barely looked at servers impl :) | 21:46 |
eday | gundlach: yes, right now cloudpipe calls nova.endpoint.cloud.run_instances (old API call that doesn't exist now). We'll want both of those and the rackspace "start servers" call all using the same method/function/whatever to send the same message to the scheduler | 21:47 |
*** jsgotangco has quit IRC | 21:48 | |
gundlach | endpoint.cloud.run_instances == api.ec2.cloud.run_instances; maybe it's sufficient to point RS API to that function, then | 21:48 |
gundlach | (and to point cloudpipe there as well) | 21:48 |
gundlach | and, now that i'm seeing that cloud.py may be used for more than just EC2 api, maybe api.ec2.cloud should be moved to api.cloud or something | 21:49 |
eday | gundlach: yup, pretty much. although I don't think the canonical function should live in api.ec2, probably move to nova.compute somewhere | 21:49 |
*** brd_from_italy has quit IRC | 21:50 | |
eday | gundlach: and all api/*/cloud can call nova.compute.api.run_instances() or something | 21:50 |
gundlach | sounds good to me | 21:50 |
eday | but I'm working on a branch that fixes cloudpipe and removes tornado for good | 21:51 |
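The design eday and gundlach converge on above is: one shared lower-level function (something like nova.compute.api.run_instances()) that does no HTTP parsing and simply casts the same "run_instance" message toward the scheduler, with each frontend (ec2, rackspace, cloudpipe) reduced to a thin translation layer on top of it. A hedged sketch of that shape, with a plain in-process queue standing in for the real message bus (Nova used AMQP, and every name here is hypothetical):

```python
import json
import queue

# Stand-in for the scheduler's message queue (hypothetical; really AMQP).
scheduler_queue = queue.Queue()

def run_instances(image_id, count=1, user_id=None):
    """Hypothetical nova.compute.api.run_instances(): the single shared
    entry point all API frontends call. It knows nothing about HTTP;
    it just casts one 'run_instance' message per requested instance."""
    instance_ids = ["i-%06d" % i for i in range(count)]
    for instance_id in instance_ids:
        scheduler_queue.put(json.dumps({
            "method": "run_instance",
            "args": {"instance_id": instance_id,
                     "image_id": image_id,
                     "user_id": user_id},
        }))
    return instance_ids

# Each frontend becomes a thin adapter over the shared function:
def ec2_run_instances(params):
    """EC2-style frontend: translate query params, then delegate."""
    return run_instances(params["ImageId"], int(params.get("MinCount", 1)))

def cloudpipe_launch(user_id):
    """Cloudpipe frontend: launch one VPN instance for a user."""
    return run_instances("ami-cloudpipe", count=1, user_id=user_id)
```

The point of the sketch is the layering, not the names: ec2/rs/cloudpipe all end up putting the same message on the queue, which matches gundlach's observation that the queue message is the real common interface.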
vishy | so let's say that i accidentally did a bzr pull in my parent directory | 21:52 |
vishy | how do i fix it? | 21:52 |
vishy | just completely hypothetically speaking | 21:52 |
vishy | :( | 21:52 |
*** adjohn has quit IRC | 21:52 | |
vishy | and for the record it is really really dumb that bzr lets me do that | 21:53 |
gundlach | vishy: about as dumb that it lets you type 'bzr revert' in the parent directory? | 21:53 |
pvo | hypothetically... :) | 21:53 |
vishy | it does | 21:53 |
vishy | and it doesn't do anything useful | 21:53 |
vishy | it doesn't actually revert in the parent dir | 21:54 |
vishy | :( | 21:54 |
gundlach | parent dir meaning nova/, inside of which live your branches and trunk/ ? | 21:54 |
vishy | so no ideas? | 21:54 |
vishy | yup | 21:54 |
dendrobates | perhaps a question for #bzr | 21:54 |
gundlach | ah | 21:54 |
vishy | sigh ok i guess i'll clean it up by hand grumble | 21:55 |
*** gondoi has quit IRC | 21:57 | |
*** hazmat has quit IRC | 22:00 | |
*** Xenith has quit IRC | 22:03 | |
*** sparkycollier has quit IRC | 22:03 | |
*** Xenith has joined #openstack | 22:03 | |
*** sparkycollier has joined #openstack | 22:04 | |
*** gundlach has quit IRC | 22:04 | |
vishy | so user error | 22:06 |
vishy | i used bzr init instead of init-repo | 22:06 |
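For context on the mistake vishy describes: `bzr init` turns a directory into a branch itself, so a later `bzr pull` run there dumps a working tree into that directory, while `bzr init-repo` creates a shared repository that merely holds branches. An illustrative workflow (requires bzr and network access; `lp:nova` was the Launchpad branch of the era, and this is a sketch rather than a prescribed setup):

```shell
# What was intended: a shared repository holding all nova branches.
bzr init-repo nova
cd nova
bzr branch lp:nova trunk      # branches live inside the repository
bzr branch trunk my-feature   # local feature branch sharing trunk's storage

# By contrast, `bzr init nova` would have made nova/ itself a branch,
# so a stray `bzr pull` run at that level pulls files into the parent
# directory, exactly the cleanup-by-hand situation described above.
```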
*** joearnol_ has joined #openstack | 22:11 | |
*** joearnold has quit IRC | 22:15 | |
*** pvo has quit IRC | 22:22 | |
*** littleidea has joined #openstack | 22:23 | |
*** kevnfx has quit IRC | 22:28 | |
*** mtaylor has joined #openstack | 22:29 | |
*** ChanServ sets mode: +v mtaylor | 22:29 | |
*** JimCurry has left #openstack | 22:33 | |
*** rnirmal has quit IRC | 22:33 | |
*** littleidea has quit IRC | 22:35 | |
*** Ryan_Lane has joined #openstack | 22:37 | |
*** abecc has quit IRC | 22:37 | |
*** sparkycollier_ has joined #openstack | 22:37 | |
*** ianw has quit IRC | 22:38 | |
*** sparkycollier has quit IRC | 22:40 | |
*** sparkycollier_ is now known as sparkycollier | 22:40 | |
*** scottie has joined #openstack | 22:41 | |
*** pharkmillups has quit IRC | 22:57 | |
*** anm_ has quit IRC | 23:10 | |
*** joearnol_ has quit IRC | 23:14 | |
*** joearnold has joined #openstack | 23:14 | |
*** Cybodog has quit IRC | 23:16 | |
*** klord has quit IRC | 23:22 | |
*** miclorb_ has joined #openstack | 23:27 | |
*** cloudmeat has quit IRC | 23:27 | |
vishy | gundlach: are you here? I'm getting test errors in S3APITestCase about twisted being activated multiple times | 23:28 |
vishy | and reactor being unclean | 23:28 |
vishy | i need eventlet 9.12? | 23:29 |
*** sophiap has quit IRC | 23:32 | |
*** sophiap has joined #openstack | 23:36 | |
rlucio | is anyone planning on updating the debian-packaging branch anytime in the near future? The branch is already pretty far out of date | 23:39 |
*** joearnol_ has joined #openstack | 23:44 | |
*** joearnold has quit IRC | 23:47 | |
*** johnbergoon has joined #openstack | 23:47 | |
vishy | rlucio: yeah, we need to do that | 23:55 |
vishy | rlucio: i was hoping to convince soren or monty to do it, but I might try and tackle it if they aren't available | 23:56 |
*** gustavomzw has quit IRC | 23:56 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!