openstackhudson | Project nova-tarmac build #51,906: STILL FAILING in 36 sec: http://hudson.openstack.org/job/nova-tarmac/51906/ | 00:02 |
---|---|---|
*** zul has quit IRC | 00:05 | |
openstackhudson | Project nova-tarmac build #51,907: STILL FAILING in 4.1 sec: http://hudson.openstack.org/job/nova-tarmac/51907/ | 00:07 |
nati | +soren: Thank you for your review of the IPv6 branch. We fixed the bug. We will push the code ASAP once Launchpad recovers. | 00:09 |
*** DubLo7 has quit IRC | 00:09 | |
*** rnirmal has joined #openstack | 00:10 | |
openstackhudson | Project nova-tarmac build #51,908: STILL FAILING in 3.4 sec: http://hudson.openstack.org/job/nova-tarmac/51908/ | 00:12 |
openstackhudson | Project nova-tarmac build #51,909: STILL FAILING in 3.3 sec: http://hudson.openstack.org/job/nova-tarmac/51909/ | 00:17 |
*** kashyapc has quit IRC | 00:20 | |
*** kashyapc has joined #openstack | 00:21 | |
openstackhudson | Yippie, build fixed! | 00:23 |
openstackhudson | Project nova-tarmac build #51,910: FIXED in 1 min 6 sec: http://hudson.openstack.org/job/nova-tarmac/51910/ | 00:23 |
*** jbaker_ has quit IRC | 00:30 | |
*** gustavomzw has joined #openstack | 00:33 | |
*** kashyapc has quit IRC | 00:34 | |
*** ccustine has quit IRC | 00:35 | |
*** pvo is now known as pvo_away | 00:36 | |
*** allsystemsarego has quit IRC | 00:37 | |
uvirtbot | New bug: #702164 in nova "error when generating novarc in get_environment_rc()" [Undecided,New] https://launchpad.net/bugs/702164 | 00:37 |
nati | soren: we fixed the IPv6 bug. Would you please check it? | 00:46 |
*** rossi_j has joined #openstack | 00:50 | |
*** gustavomzw has quit IRC | 00:50 | |
*** kashyapc has joined #openstack | 00:50 | |
*** rossij has quit IRC | 00:51 | |
*** irahgel has left #openstack | 01:01 | |
*** guynaor has joined #openstack | 01:08 | |
*** guynaor has left #openstack | 01:09 | |
*** Jordandev has joined #openstack | 01:12 | |
*** rlucio has quit IRC | 01:13 | |
*** Ryan_Lane is now known as Ryan_Lane|food | 01:15 | |
*** Jordandev has quit IRC | 01:21 | |
*** mray has quit IRC | 01:23 | |
uvirtbot | New bug: #702200 in nova "ec2 api create_volume raises an exception" [High,In progress] https://launchpad.net/bugs/702200 | 01:26 |
*** Hisaharu has joined #openstack | 01:35 | |
*** dragondm has quit IRC | 01:36 | |
*** kpepple has joined #openstack | 01:37 | |
*** DubLo7 has joined #openstack | 01:41 | |
*** joearnold has quit IRC | 01:52 | |
*** nati_ has joined #openstack | 01:53 | |
*** sophiap has joined #openstack | 01:55 | |
*** nati has quit IRC | 01:56 | |
*** adiantum has joined #openstack | 02:01 | |
*** rlucio has joined #openstack | 02:03 | |
*** pvo_away is now known as pvo | 02:03 | |
openstackhudson | Project nova build #397: SUCCESS in 1 min 26 sec: http://hudson.openstack.org/job/nova/397/ | 02:03 |
openstackhudson | Tarmac: Fixes a typo in the name of a variable. | 02:03 |
*** rlucio has quit IRC | 02:07 | |
*** rnirmal has quit IRC | 02:07 | |
*** adiantum has quit IRC | 02:09 | |
*** johnpur has quit IRC | 02:10 | |
*** irahgel has joined #openstack | 02:10 | |
*** irahgel has left #openstack | 02:10 | |
*** adiantum has joined #openstack | 02:14 | |
*** ChumbyJay has quit IRC | 02:16 | |
*** drico has quit IRC | 02:18 | |
*** Ryan_Lane|food is now known as Ryan_Lane | 02:18 | |
*** hadrian_ has joined #openstack | 02:25 | |
*** Dweezahr has quit IRC | 02:25 | |
*** hadrian has quit IRC | 02:28 | |
*** hadrian_ is now known as hadrian | 02:28 | |
*** adiantum has quit IRC | 02:42 | |
*** pvo is now known as pvo_away | 02:42 | |
*** pvo_away is now known as pvo | 02:42 | |
*** adiantum has joined #openstack | 02:47 | |
*** kpepple has left #openstack | 02:51 | |
*** nelson__ has quit IRC | 03:00 | |
*** nelson__ has joined #openstack | 03:01 | |
*** ehazlett has joined #openstack | 03:05 | |
*** maplebed has quit IRC | 03:07 | |
*** hadrian has quit IRC | 03:08 | |
*** adiantum has quit IRC | 03:12 | |
*** adiantum has joined #openstack | 03:17 | |
*** skrusty has quit IRC | 03:21 | |
*** rlucio has joined #openstack | 03:21 | |
*** rlucio has quit IRC | 03:24 | |
*** pvo is now known as pvo_away | 03:32 | |
*** jdurgin has quit IRC | 03:33 | |
*** skrusty has joined #openstack | 03:35 | |
dirakx | Hi all, although I have what seems to be a fully configured nova, I'm now hitting an eucalyptus bug https://bugs.launchpad.net/ubuntu/+source/eucalyptus/+bug/470355 any hints on how to solve it? | 03:37 |
uvirtbot | Launchpad bug 470355 in eucalyptus "ec2-bundle-vol and ec2-upload-bundle result in non accepted manifest" [Low,Fix released] | 03:37 |
*** adiantum has quit IRC | 03:42 | |
*** gondoi has joined #openstack | 03:44 | |
*** dfg_ has quit IRC | 03:46 | |
*** kashyapc has quit IRC | 03:51 | |
*** adiantum has joined #openstack | 03:54 | |
termie | there is a test somewhere that leaks an instance in the db | 04:01 |
termie | and it is screwing up other tests | 04:01 |
termie | my next spare-time patch is going to wipe the frickin database properly | 04:01 |
termie | brute force test permutation shell script now in operation | 04:09 |
*** dirakx has quit IRC | 04:13 | |
*** sophiap has quit IRC | 04:30 | |
*** mdomsch has joined #openstack | 04:35 | |
*** kashyapc has joined #openstack | 04:37 | |
*** gondoi has quit IRC | 04:42 | |
*** omidhdl has joined #openstack | 04:44 | |
*** phymata has quit IRC | 04:55 | |
*** paultag has left #openstack | 05:03 | |
uvirtbot | New bug: #702237 in nova "FLAGS.allow_project_net_traffic is not implemented in IptablesFirewallDriver" [Undecided,New] https://launchpad.net/bugs/702237 | 05:36 |
ehazlett | i get an invalid cert / failed to upload kernel error when trying to use an Ubuntu UEC image with the latest nova | 05:36 |
ehazlett | (installed using the nova quickstart nova.sh script) | 05:37 |
*** aimon has quit IRC | 05:48 | |
*** aimon has joined #openstack | 05:48 | |
*** elasticdog has quit IRC | 05:49 | |
*** adiantum has quit IRC | 05:57 | |
*** f4m8_ is now known as f4m8 | 05:58 | |
*** ramkrsna has joined #openstack | 06:01 | |
*** adiantum has joined #openstack | 06:02 | |
*** elasticdog has joined #openstack | 06:02 | |
*** fcarsten has quit IRC | 06:04 | |
*** adiantum has quit IRC | 06:13 | |
*** omidhdl has left #openstack | 06:17 | |
*** adiantum has joined #openstack | 06:18 | |
*** arcane has quit IRC | 06:29 | |
*** Ryan_Lane is now known as Ryan_Lane|away | 06:31 | |
*** sparkycollier has quit IRC | 06:33 | |
*** ehazlett has quit IRC | 06:36 | |
*** adiantum has quit IRC | 06:43 | |
*** adiantum has joined #openstack | 06:47 | |
*** arcane has joined #openstack | 06:49 | |
*** guigui has joined #openstack | 06:59 | |
*** adiantum has quit IRC | 07:13 | |
*** Ryan_Lane|away has quit IRC | 07:14 | |
*** adiantum has joined #openstack | 07:25 | |
*** miclorb_ has quit IRC | 07:26 | |
ttx | devcamcar: cool, I look forward to testing this. | 07:28 |
*** skrusty has quit IRC | 07:31 | |
nati_ | Hi, Is anyone knows how to change authors which has already commited in bzr. | 07:32 |
ttx | nati_: hmm, not really... could you explain why you'd need to do that ? | 07:35 |
*** nati_ has quit IRC | 07:39 | |
*** mjmac has quit IRC | 07:43 | |
*** mjmac_ has joined #openstack | 07:43 | |
*** befreax has joined #openstack | 07:44 | |
*** skrusty has joined #openstack | 07:45 | |
*** MarkAtwood has quit IRC | 07:49 | |
*** brd_from_italy has joined #openstack | 07:56 | |
*** adiantum has quit IRC | 08:07 | |
*** gaveen has joined #openstack | 08:14 | |
*** miclorb has joined #openstack | 08:19 | |
*** calavera has joined #openstack | 08:20 | |
*** rcc has joined #openstack | 08:27 | |
*** reldan has quit IRC | 08:28 | |
*** BK_man has quit IRC | 08:30 | |
*** BK_man has joined #openstack | 08:35 | |
*** opengeard has joined #openstack | 08:43 | |
*** ibarrera has joined #openstack | 08:53 | |
*** BK_man has quit IRC | 09:07 | |
*** reldan has joined #openstack | 09:10 | |
*** jt_zg has quit IRC | 09:13 | |
*** Ryan_Lane has joined #openstack | 09:18 | |
*** daleolds1 has quit IRC | 09:19 | |
*** jt_zg has joined #openstack | 09:22 | |
*** drico has joined #openstack | 09:27 | |
*** pcoca_ has joined #openstack | 09:48 | |
*** reldan has quit IRC | 09:58 | |
*** fabiand_ has joined #openstack | 09:58 | |
*** reldan has joined #openstack | 09:58 | |
*** dizz has joined #openstack | 10:03 | |
*** reldan has joined #openstack | 10:05 | |
*** miclorb has quit IRC | 10:12 | |
*** trin_cz has quit IRC | 10:18 | |
*** dizz is now known as dizz|away | 10:18 | |
*** dizz|away is now known as dizz | 10:20 | |
*** rcc_ has joined #openstack | 10:35 | |
*** rcc has quit IRC | 10:35 | |
*** irahgel has joined #openstack | 10:35 | |
*** ramkrsna has quit IRC | 10:43 | |
*** ramkrsna has joined #openstack | 10:56 | |
*** rcc_ has quit IRC | 11:01 | |
*** drico has quit IRC | 11:12 | |
*** trin_cz has joined #openstack | 11:15 | |
*** dizz is now known as dizz|away | 11:17 | |
*** allsystemsarego has joined #openstack | 11:17 | |
*** reldan has quit IRC | 11:31 | |
*** omidhdl has joined #openstack | 11:33 | |
*** reldan has joined #openstack | 11:36 | |
*** nati has joined #openstack | 11:38 | |
*** reldan has quit IRC | 11:44 | |
* soren heads lunchwards | 11:45 | |
*** reldan has joined #openstack | 11:46 | |
nati | ttx: I found the author was not correctly set in my previous branch. So I would like to fix that. | 11:49 |
*** hky has quit IRC | 11:54 | |
*** adiantum has joined #openstack | 11:57 | |
*** hazmat has quit IRC | 12:08 | |
*** adiantum has quit IRC | 12:12 | |
*** omidhdl has left #openstack | 12:18 | |
*** ctennis has quit IRC | 12:22 | |
*** DubLo7 has quit IRC | 12:25 | |
*** arthurc has joined #openstack | 12:27 | |
*** kashyapc has quit IRC | 12:30 | |
*** soeren has joined #openstack | 12:31 | |
*** soeren has quit IRC | 12:32 | |
*** soeren has joined #openstack | 12:33 | |
*** DigitalFlux has joined #openstack | 12:34 | |
*** DubLo7 has joined #openstack | 12:35 | |
*** ctennis has joined #openstack | 12:36 | |
*** ctennis has joined #openstack | 12:36 | |
*** DubLo7 has quit IRC | 12:39 | |
ttx | nati: maybe soren knows a trick for that | 12:39 |
soren | hm? | 12:39 |
soren | for what? | 12:39 |
*** nati has quit IRC | 12:44 | |
*** pvo_away is now known as pvo | 12:48 | |
sandywalsh | o/ | 12:58 |
*** ramkrsna has quit IRC | 13:00 | |
soren | sandywalsh: \o | 13:00 |
*** WonTu has joined #openstack | 13:06 | |
*** WonTu has left #openstack | 13:07 | |
*** reldan has quit IRC | 13:09 | |
ttx | soren: <nati_> Hi, does anyone know how to change the author of commits that have already been made in bzr? | 13:12 |
*** DubLo7 has joined #openstack | 13:15 | |
*** reldan has joined #openstack | 13:15 | |
soren | Can't be done. | 13:18 |
*** Jordandev has joined #openstack | 13:18 | |
soren | Chocolate. Why have I no chocolate? | 13:20 |
fraggeln | soren: The store does ;) | 13:20 |
soren | But I'm here. The store is there. | 13:20 |
*** DubLo7 has quit IRC | 13:20 | |
*** DubLo7 has joined #openstack | 13:20 | |
soren | I may even have some in a cupboard. But the cupboard is there. I'm here. | 13:21 |
sandywalsh | http://mail.python.org/pipermail/web-sig/2011-January/004979.html | 13:22 |
fraggeln | soren: shame on you... | 13:23 |
fraggeln | stockpiling chocolate like a girl.. :D | 13:23 |
*** reldan has quit IRC | 13:24 | |
*** stewart has quit IRC | 13:28 | |
*** stewart has joined #openstack | 13:28 | |
*** adiantum has joined #openstack | 13:30 | |
ttx | Daviey: see https://code.launchpad.net/~termie/nova/db_migration/+merge/46073 | 13:31 |
*** omidhdl has joined #openstack | 13:33 | |
*** westmaas has joined #openstack | 13:33 | |
*** hadrian has joined #openstack | 13:34 | |
*** pvo is now known as pvo_away | 13:38 | |
*** rnirmal has joined #openstack | 13:42 | |
*** omidhdl has quit IRC | 13:52 | |
*** ppetraki has joined #openstack | 13:56 | |
*** mdomsch has quit IRC | 13:58 | |
*** mdomsch has joined #openstack | 13:58 | |
*** whaley has quit IRC | 14:00 | |
*** troytoman has joined #openstack | 14:01 | |
*** Jordandev has quit IRC | 14:01 | |
*** reldan has joined #openstack | 14:02 | |
*** whaley has joined #openstack | 14:03 | |
openstackhudson | Project nova build #398: SUCCESS in 1 min 23 sec: http://hudson.openstack.org/job/nova/398/ | 14:13 |
openstackhudson | Tarmac: Fixes bug #701055. Moves code for instance termination inline so that the manager doesn't prematurely mark an instance as deleted. Prematurely doing so causes find calls to fail, prevents instance data from being deleted, and also causes some other issues. | 14:13 |
uvirtbot | Launchpad bug 701055 in nova ""No instance for id X" error terminating instances" [High,In progress] https://launchpad.net/bugs/701055 | 14:13 |
jaypipes | FYI, Nova being translated into 7 languages already :) https://translations.launchpad.net/nova/trunk/+pots/nova | 14:16 |
uvirtbot | New bug: #702370 in nova "IptablesFirewallDriver prevents instance startup once one has been terminated" [High,Triaged] https://launchpad.net/bugs/702370 | 14:16 |
ttx | jaypipes: I plan on doing french. Soon. | 14:18 |
jaypipes | ttx: waiting! ;P | 14:18 |
jaypipes | ttx: slacker. | 14:19 |
*** dizz|away has quit IRC | 14:24 | |
*** dizz has joined #openstack | 14:26 | |
*** dendrobates is now known as dendro-afk | 14:26 | |
*** dendro-afk is now known as dendrobates | 14:27 | |
*** gaveen has quit IRC | 14:30 | |
*** burris has quit IRC | 14:31 | |
ttx | jaypipes: you, get https://code.launchpad.net/~chris-slicehost/glance/add-s3-backend_updated/+merge/45217 in :P | 14:33 |
jaypipes | ttx: will do once I get _0x44 to fix up from review ;) | 14:35 |
ttx | jaypipes: I thought he did... | 14:35 |
ttx | "Fixes suggested by JayPipes review" | 14:35 |
jaypipes | ttx: well poop. I just haven't gotten through my emails yet this morning. :P | 14:36 |
ttx | jaypipes: a-ha! who is the slacker *now* ? | 14:36 |
*** burris has joined #openstack | 14:37 | |
jaypipes | ttx: me. /me *shrugs sheepishly* | 14:37 |
ttx | heh | 14:38 |
_0x44 | jaypipes: I already did? | 14:41 |
_0x44 | jaypipes: I mentioned it yesterday | 14:41 |
_0x44 | jaypipes: http://bazaar.launchpad.net/~chris-slicehost/glance/add-s3-backend_updated/revision/29 | 14:41 |
*** dizz has left #openstack | 14:41 | |
jaypipes | _0x44: yeah, my bad bro :) | 14:42 |
_0x44 | jaypipes: No worries :) I had to check the merge-prop to see if you'd added stuff. | 14:42 |
_0x44 | jaypipes: Is boto installed on the hudson machine? | 14:42 |
jaypipes | _0x44: ya, should be. | 14:42 |
_0x44 | Awesome, thanks | 14:43 |
jaypipes | _0x44: no, thank YOU. :) | 14:43 |
_0x44 | jaypipes: No. Thank _you_ >:O | 14:43 |
_0x44 | ;) | 14:43 |
* jaypipes channels Half Baked. | 14:44 | |
ttx | Looks like we broke the 1k commits barrier over the last month for nova... "There were 1083 commits by 58 people in the last month" | 14:45 |
jaypipes | heh, nice :) | 14:45 |
jaypipes | ttx: with 950 in the past 4 days ;) | 14:45 |
ttx | nah... :) | 14:45 |
*** adiantum has quit IRC | 14:46 | |
ttx | now what does it take to get featured in the https://launchpad.net/ project list | 14:46 |
jaypipes | ttx: bribery. | 14:46 |
ttx | yes, I know, the list is hand-picked | 14:46 |
ttx | how much did it cost you to get Drizzle in there ? | 14:46 |
*** reldan has quit IRC | 14:47 | |
*** reldan has joined #openstack | 14:49 | |
*** DubLo7 has quit IRC | 14:52 | |
_0x44 | ttx: jaypipes had to go on a date with Mark Shuttleworth | 14:52 |
*** DubLo7 has joined #openstack | 14:52 | |
jaypipes | _0x44: shhh..don't tell. | 14:54 |
*** sophiap has joined #openstack | 14:54 | |
*** gondoi has joined #openstack | 14:57 | |
_0x44 | jaypipes: We're hiring a guy from Canonical to work on the OpenQuake project and he said the pictures are still up in the lobby of the head office. ;) | 14:57 |
*** f4m8 is now known as f4m8_ | 14:58 | |
jaypipes | _0x44: lol | 14:58 |
openstackhudson | Project nova build #399: SUCCESS in 1 min 26 sec: http://hudson.openstack.org/job/nova/399/ | 14:58 |
openstackhudson | Tarmac: Fixes broken call to __generate_rc in auth manager. | 14:58 |
*** adiantum has joined #openstack | 14:58 | |
*** pvo_away is now known as pvo | 15:00 | |
ttx | jaypipes: ooh, *that* picture ! | 15:00 |
*** adiantum has quit IRC | 15:05 | |
*** reldan has quit IRC | 15:06 | |
*** reldan has joined #openstack | 15:17 | |
*** pvo is now known as pvo_away | 15:18 | |
soren | It would be really awesome if someone who actually knows something at all about ipv6 could review https://code.launchpad.net/~ntt-pf-lab/nova/ipv6-support/+merge/46103 | 15:20 |
soren | All I know is that it's the thing that's going to save the Internet. Other than that, I'm blank. | 15:20 |
*** adiantum has joined #openstack | 15:23 | |
*** hazmat has joined #openstack | 15:24 | |
colinnich | swift - anyone know of any reason why my proxy logs aren't going to /var/log/swift? I've followed the log processing guide, and I'm sure everything's the same as last time I did it... Anything changed? | 15:25 |
*** Facefoxdotcom has joined #openstack | 15:25 | |
colinnich | I have log_facility = LOG_LOCAL1 in my proxy-server.conf and think I have set up syslog-ng correctly | 15:25 |
notmyname | colinnich: you are using the instructions in the "running the stats system with saio"? | 15:26 |
colinnich | notmyname: yeah | 15:27 |
*** fabiand_ has left #openstack | 15:27 | |
notmyname | colinnich: for fun, change log_facility = LOG_LOCAL0 and also make the relevant changes to syslog-ng.conf | 15:28 |
*** pvo_away is now known as pvo | 15:28 | |
soren | dabo: I'd like to repeat my offer of sharing the code I have that uses M2Crypto and can successfully exchange keys with your SimpleDH. | 15:29 |
notmyname | I may have run in to that issue recently, but I was in the middle of other stuff and just needed it to get working. I changed it to log facility 0 and things worked. I'm not sure why it has issues with log facility 1 | 15:29 |
dabo | soren: please!! | 15:29 |
*** hggdh has joined #openstack | 15:29 | |
soren | dabo: Hang on. | 15:29 |
*** pvo is now known as pvo_away | 15:29 | |
soren | dabo: http://pastebin.com/itX90g3W | 15:31 |
soren | dabo: I'm sure there's a prettier way to implement bin_to_dec. | 15:31 |
colinnich | notmyname: working now, thanks as always :-) | 15:31 |
dabo | soren: thanks. Looking at it now | 15:31 |
notmyname | colinnich: good. I need to take some time to figure out what's wrong in the docs | 15:33 |
colinnich | notmyname: I have another system running stock 1.1 which is fine with level 1, strange | 15:34 |
notmyname | hmm | 15:34 |
colinnich | notmyname: this one is running latest | 15:34 |
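For reference, the log_facility value in proxy-server.conf names a standard syslog facility, which syslog-ng must then route to /var/log/swift. A minimal Python sketch of how such a setting maps onto a syslog handler (this is an illustration, not Swift's actual logging code; the logger name is made up):

```python
# Illustration only (not Swift's actual logging code): map a conf string
# like "LOG_LOCAL0" onto Python's SysLogHandler facility. syslog-ng still
# has to be configured to route local0/local1 to /var/log/swift for the
# messages to land there.
import logging
from logging.handlers import SysLogHandler

def get_syslog_logger(name, log_facility='LOG_LOCAL0'):
    # Resolve the conf string to the handler constant,
    # e.g. 'LOG_LOCAL0' -> SysLogHandler.LOG_LOCAL0.
    facility = getattr(SysLogHandler, log_facility, SysLogHandler.LOG_LOCAL0)
    handler = SysLogHandler(address='/dev/log', facility=facility)
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger

logger = get_syslog_logger('proxy-server')
logger.info('test message routed via local0')
```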
*** jeffdarcy has joined #openstack | 15:34 | |
*** zul has joined #openstack | 15:35 | |
*** Ryan_Lane is now known as Ryan_Lane|away | 15:38 | |
*** rnirmal has quit IRC | 15:39 | |
*** rnirmal has joined #openstack | 15:40 | |
*** dabo has quit IRC | 15:40 | |
*** jeffdarcy has quit IRC | 15:44 | |
*** dabo has joined #openstack | 15:44 | |
*** hggdh has quit IRC | 15:46 | |
*** hggdh has joined #openstack | 15:47 | |
*** opengeard has quit IRC | 15:47 | |
*** troytoman has quit IRC | 15:50 | |
*** dragondm has joined #openstack | 15:53 | |
*** allsystemsarego has quit IRC | 15:55 | |
*** westmaas has quit IRC | 15:57 | |
*** dubsquared1 has joined #openstack | 16:02 | |
*** jimbaker has joined #openstack | 16:06 | |
soren | ttx: Patch for https://bugs.launchpad.net/nova/+bug/702370 up. | 16:11 |
uvirtbot | Launchpad bug 702370 in nova "IptablesFirewallDriver prevents instance startup once one has been terminated" [High,Triaged] | 16:11 |
* ttx tests | 16:12 | |
*** pvo_away is now known as pvo | 16:14 | |
*** calavera has quit IRC | 16:15 | |
sandywalsh | review anyone? | 16:18 |
sandywalsh | taking easy-api | 16:19 |
*** pvo is now known as pvo_away | 16:19 | |
ttx | sandywalsh: ipv6 and live-migration could use some extra love | 16:23 |
sandywalsh | ttx, ok, I'll do live-migration right after easy-api (I've been meaning to look at easy-api anyway). | 16:23 |
ttx | soren: your patch apparently fixes the bug alright. Reviewing branch now | 16:24 |
ttx | sandywalsh: your choice ;) | 16:24 |
*** phymata has joined #openstack | 16:24 | |
*** reldan has quit IRC | 16:26 | |
*** pvo_away is now known as pvo | 16:29 | |
*** matclayton has joined #openstack | 16:30 | |
trin_cz | anyone: how do I run an instance without a ramdisk? euca-run-instances cannot find ari-11111 ... which I assume is a default value for the ramdisk id | 16:30 |
*** reldan has joined #openstack | 16:34 | |
soren | trin_cz: You just have to run a newer version of Nova. | 16:35 |
*** reldan has quit IRC | 16:36 | |
*** reldan has joined #openstack | 16:37 | |
colinnich | another swift question - what is the sensible maximum number of objects in a container? | 16:38 |
*** dfg_ has joined #openstack | 16:38 | |
*** trin_cz has quit IRC | 16:38 | |
*** brd_from_italy has quit IRC | 16:42 | |
*** DigitalFlux has quit IRC | 16:49 | |
*** ccustine has joined #openstack | 16:54 | |
creiht | colinnich: depends on how you define sensible :) | 16:56 |
creiht | also depends on how you are using the container | 16:56 |
creiht | the container is a sqlite db, so as you add more and more objects, the db gets larger | 16:56 |
creiht | and requests that require hits to the container get a little slower | 16:56 |
colinnich | creiht: in the context of a backup client - so storing potentially millions of file parts | 16:57 |
creiht | So listings will get a bit slower | 16:57 |
soren | ttx: Thanks for catching the version.py stuff on that branch. Fixed now. | 16:57 |
ttx | re-reviewing | 16:58 |
creiht | object PUTs can get a little slower, and the consistency window gets a little longer | 16:58 |
*** mdomsch has quit IRC | 16:58 | |
creiht | The problem gets worse if you are doing a lot of PUTs concurrently to the same container | 16:58 |
creiht | Our testing has indicated that it becomes noticeable on the order of millions of objects in a container | 16:59 |
*** DubLo7 has quit IRC | 17:00 | |
colinnich | creiht: ok, thanks | 17:00 |
dabo | soren: your DH code has been implemented and pushed. | 17:01 |
soren | \o/ | 17:01 |
creiht | colinnich: does the backup client upload the data with any concurrency? | 17:02 |
*** troytoman has joined #openstack | 17:03 | |
*** rnirmal has quit IRC | 17:03 | |
colinnich | creiht: not sure, it doesn't exist yet - was talking to someone earlier who had an idea and was considering swift as a back end | 17:03 |
creiht | ahh | 17:04 |
colinnich | creiht: his only question I couldn't answer :-) | 17:04 |
creiht | colinnich: we also have some ideas on how to distribute the containers across the cluster | 17:04 |
creiht | which would make containers unlimited | 17:05 |
creiht | colinnich: also, as a side note, if he can work the backup client to distribute the data among several containers, then those problems go away | 17:05 |
* ttx breaks for dinner, bbl | 17:06 | |
creiht | as swift doesn't limit the containers like s3 does | 17:06 |
colinnich | creiht: ok, I'll pass that on, thanks | 17:06 |
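A rough sketch of the container-sharding idea creiht suggests, assuming a made-up scheme of 16 containers: the backup client hashes each object name to pick a container, so no single container's sqlite database grows into the millions of rows and concurrent PUTs fan out.

```python
# Illustration only: spread objects across N containers by hashing the
# object name. The container naming scheme ("backup_00".."backup_15")
# and the count of 16 are arbitrary choices for this sketch.
import hashlib

NUM_CONTAINERS = 16

def container_for(object_name, prefix='backup'):
    digest = hashlib.md5(object_name.encode('utf-8')).hexdigest()
    shard = int(digest, 16) % NUM_CONTAINERS
    return '%s_%02d' % (prefix, shard)

# Every PUT/GET for "host1/etc/passwd.part0" consistently targets the
# same container, while different objects spread across all 16.
print(container_for('host1/etc/passwd.part0'))
```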
*** timrc has quit IRC | 17:07 | |
*** larstobi has joined #openstack | 17:07 | |
*** guigui has quit IRC | 17:11 | |
* soren goes to dinner | 17:12 | |
*** Ryan_Lane|away is now known as Ryan_Lane | 17:13 | |
*** timrc has joined #openstack | 17:16 | |
*** westmaas has joined #openstack | 17:19 | |
*** pvo is now known as pvo_away | 17:20 | |
*** omidhdl has joined #openstack | 17:27 | |
*** rlucio has joined #openstack | 17:33 | |
*** mjmac_ is now known as mjmac | 17:38 | |
*** rlucio has quit IRC | 17:38 | |
*** ibarrera has quit IRC | 17:44 | |
*** joearnold has joined #openstack | 17:46 | |
*** MarkAtwood has joined #openstack | 17:50 | |
*** maplebed has joined #openstack | 17:55 | |
*** jdurgin has joined #openstack | 17:58 | |
*** pvo_away is now known as pvo | 18:05 | |
anticw | what is the main branch/location on launchpad for swift (where releases come from)? | 18:05 |
creiht | anticw: lp:swift | 18:05 |
creiht | https://launchpad.net/swift/trunk | 18:06 |
anticw | thanks, that's what i was after ... some links lead to .../~swift/ which has no active branches, and there is swift-core as well ... which has some other work | 18:07 |
creiht | anything prefixed ~ is a group in lp | 18:07 |
creiht | and yeah it can get a bit confusing | 18:07 |
creiht | compounded with the large number of groups that we have | 18:08 |
*** piken_ has quit IRC | 18:18 | |
openstackhudson | Project nova build #400: SUCCESS in 1 min 23 sec: http://hudson.openstack.org/job/nova/400/ | 18:23 |
openstackhudson | Tarmac: Fixes related to how EC2 ids are displayed and dealt with. | 18:23 |
openstackhudson | Additionally adds two flags that define a template string that is used for the internal naming of things (like the volume name of a logical volume on disk), default being similar to the EC2 format, so that the ids are easy to match while testing when you may need to manually delete or check something. | 18:23 |
*** DubLo7 has joined #openstack | 18:23 | |
*** irahgel has left #openstack | 18:24 | |
*** johnpur has joined #openstack | 18:24 | |
*** ChanServ sets mode: +v johnpur | 18:24 | |
*** kpepple has joined #openstack | 18:27 | |
*** Cybo has joined #openstack | 18:29 | |
*** aliguori has quit IRC | 18:30 | |
*** aimon_ has joined #openstack | 18:32 | |
*** reldan has quit IRC | 18:33 | |
*** dirakx has joined #openstack | 18:34 | |
*** adiantum has quit IRC | 18:34 | |
jeremyb | creiht: ~ can also mean user (not group) right? | 18:36 |
*** aimon has quit IRC | 18:36 | |
*** aimon_ is now known as aimon | 18:36 | |
jaypipes | creiht, wreese: lemme know if you guys want/need any help automating i18n stuff for Swift. | 18:37 |
*** opengeard has joined #openstack | 18:41 | |
sandywalsh | termie around? | 18:43 |
*** ccustine has quit IRC | 18:44 | |
*** omidhdl has quit IRC | 18:45 | |
*** ugarit has joined #openstack | 18:48 | |
*** ccustine has joined #openstack | 18:48 | |
*** ccustine has quit IRC | 18:54 | |
*** pvo is now known as pvo_away | 18:55 | |
mtaylor | soren: your patch has been merged into hudson trunk - but I have taken credit for it | 18:59 |
*** hggdh has quit IRC | 19:00 | |
*** aliguori has joined #openstack | 19:05 | |
openstackhudson | Project nova build #401: SUCCESS in 1 min 27 sec: http://hudson.openstack.org/job/nova/401/ | 19:08 |
openstackhudson | Tarmac: Add a new method to firewall drivers to tell them to stop filtering a particular instance. Call it when an instance has been destroyed. | 19:08 |
openstackhudson | Use dict()s (keyed off id's) instead of set()s for keeping track of instances and security groups in the iptables firewall driver. __eq__ for objects from sqlalchemy fetched in different sessions doesn't work as expected, so I needed to explicitly filter on ID. | 19:08 |
ugarit | is if one installs openstack will one have an identical platform as http://nebula.nasa.gov | 19:09 |
ugarit | let me try one more time :-) if one installs openstack will one have an identical platform to http://nebula.nasa.gov? | 19:11 |
uvirtbot | New bug: #702549 in nova "OS API looking up wrong instance ID for 'show'" [Undecided,New] https://launchpad.net/bugs/702549 | 19:11 |
kpepple | ugarit: you should ask some of the anso labs guys about this — maybe vishy ? | 19:12 |
vishy | ugarit: no | 19:12 |
kpepple | ugarit: AFAIK, no openstack != nebula | 19:12 |
vishy | ugarit: you will have a comparable backend | 19:12 |
ugarit | vishy : so what would be different? | 19:13 |
vishy | ugarit: no web interface for one | 19:13 |
ugarit | ah | 19:13 |
ugarit | so web interface is not open source? | 19:14 |
vishy | ugarit: you need this too https://launchpad.net/openstack-dashboard | 19:14 |
vishy | :) | 19:14 |
vishy | and you will probably need to do some customization | 19:14 |
ugarit | is the dashboard a web interface? | 19:14 |
vishy | nebula is based on openstack | 19:14 |
vishy | yes | 19:14 |
ugarit | thanks vishy | 19:15 |
ugarit | can openstack instanciate MS Windows VMs? | 19:15 |
ugarit | so nebula's web interface is not https://launchpad.net/openstack-dashboard ? | 19:16 |
vishy | ugarit: bexar can, but it will require a customized interface | 19:16 |
vishy | ugarit: nebula is based on openstack + openstack-dashboard | 19:16 |
vishy | s/interface/windows instance/ | 19:16 |
*** reldan has joined #openstack | 19:17 | |
ugarit | what's bexar? | 19:17 |
eday | bexar is the name for the current release we're working on | 19:18 |
ugarit | why did NASA call it nebula? People tend to now confuse it with http://opennebula.org | 19:19 |
*** Ryan_Lane is now known as Ryan_Lane|away | 19:21 | |
*** arthurc has quit IRC | 19:22 | |
*** befreax has quit IRC | 19:23 | |
ugarit | vishy what kind of customization would one have to perform on openstack+openstack-dashboard ? | 19:23 |
vishy | ugarit: don't ask me about the naming | 19:25 |
ugarit | :-) | 19:25 |
vishy | ugarit: if you get them both running together you will have an approximation of nebula | 19:25 |
*** allsystemsarego has joined #openstack | 19:25 | |
ugarit | cool | 19:25 |
vishy | ugarit: gl | 19:25 |
ugarit | ? | 19:26 |
ugarit | is openstack OCCI compliant ? | 19:26 |
*** daleolds has joined #openstack | 19:26 | |
kpepple | ugarit: are you asking if nova implements the OCCI api ? | 19:31 |
ugarit | yes | 19:32 |
kpepple | ugarit: in progress. see here - http://occi-wg.org/2010/12/14/occi-in-openstack/ | 19:32 |
ugarit | great. thanks kpepple | 19:33 |
*** dendrobates is now known as dendro-afk | 19:35 | |
*** dendro-afk is now known as dendrobates | 19:35 | |
*** sophiap has quit IRC | 19:39 | |
*** ccustine has joined #openstack | 19:47 | |
*** fabiand_ has joined #openstack | 19:47 | |
*** Tushar has joined #openstack | 19:52 | |
*** pvo_away is now known as pvo | 19:52 | |
*** pvo is now known as pvo_away | 19:52 | |
*** pvo_away is now known as pvo | 19:52 | |
*** masumotok has joined #openstack | 19:55 | |
*** skrusty has quit IRC | 19:56 | |
sandywalsh | damn, I wish you could edit merge-prop comments after submission. | 19:57 |
masumotok | Hi, does anyone know how to add more than 2 reviewers when submitting a merge proposal? | 19:58 |
masumotok | at the last IRC meeting, I think ttx said to add him as a reviewer, but other people have already reviewed it, so I need to list 2 reviewers on my next merge proposal | 20:00 |
*** fabiand_ has quit IRC | 20:01 | |
sandywalsh | xtoddx, when you push your changes please let me know. But, as I mentioned, even when I fixed it locally I still got the other exception: 'module' object has no attribute 'API' | 20:01 |
ugarit | when is the next openstack summit? | 20:04 |
*** adiantum has joined #openstack | 20:04 | |
sandywalsh | last was in Nov and it's every 6 months | 20:05 |
*** brd_from_italy has joined #openstack | 20:06 | |
ugarit | do I need to use PXE boot with Openstack ? | 20:06 |
ugarit | nvm. not a good question | 20:07 |
*** fabiand_ has joined #openstack | 20:07 | |
creiht | jeremyb: ahh true | 20:07 |
*** kpepple has left #openstack | 20:07 | |
creiht | jaypipes: I think redbo has most of the stuff we need in swift for the i18n logging, so I would get with him to figure out the next steps | 20:07 |
creiht | and thanks :) | 20:08 |
glenc | ugarit: openstack summit is april 26-29 in santa clara, ca | 20:09 |
ugarit | nice location :-) | 20:09 |
ugarit | is there a link for that info? | 20:09 |
*** skrusty has joined #openstack | 20:10 | |
Xenith | ugarit: http://wiki.openstack.org/Summit/Spring2011 | 20:10 |
ugarit | Xenith thanks | 20:11 |
jeremyb | any examples of people that have hit a wall with swift write or read throughput? | 20:13 |
* jeremyb was chatting elsewhere about how to expose swift to other services (not openstack) and the outside world | 20:14 | |
*** matclayton has left #openstack | 20:14 | |
jeremyb | 08 05:06:45 <+redbo> I wouldn't expose it directly to public traffic | 20:14 |
*** trin_cz has joined #openstack | 20:15 | |
Tushar | Today is feature freeze day; can someone please review the ipv6 branch? | 20:15 |
Tushar | https://code.launchpad.net/~ntt-pf-lab/nova/ipv6-support/+merge/46103? | 20:15 |
jeremyb | does that mean simple load balancing isn't enough? | 20:15 |
dragondm | Tushar: I've been looking at the ip6 code, I can put in a review | 20:15 |
creiht | jeremyb: in the end, the throughput is going to be determined by how many proxies and storage nodes you have | 20:16 |
jeremyb | creiht: sure | 20:16 |
jeremyb | creiht: but if you're at 20% capacity for storage then adding storage nodes may not be the best solution | 20:17 |
creiht | ahh | 20:17 |
Tushar | dragondm: thanks. | 20:17 |
jeremyb | all speculation | 20:17 |
creiht | jeremyb: so as a reference point | 20:17 |
creiht | we currently have a layer of proxy servers that are public facing | 20:17 |
creiht | and a layer of storage nodes that are on the private network | 20:18 |
jeremyb | i know, and you can't tell me anything about them | 20:18 |
creiht | lol | 20:18 |
creiht | what I can tell you is... | 20:18 |
creiht | :) | 20:18 |
creiht | For PUT throughput, given enough bandwidth, our proxies max out at 1500-2000 PUTs/s | 20:18 |
creiht | so each proxy we add gives us the capability to handle another 1500-2000 PUTs/s | 20:19 |
creiht | at that rate for PUTs, a single proxy can saturate around 4 storage nodes | 20:19 |
jeremyb | any rough idea how many proxies there are? | 20:19 |
creiht | that's the part I can't talk about right now :/ | 20:20 |
*** fabiand_ has quit IRC | 20:20 | |
jeremyb | you can't say 10 vs. 10k ? :/ | 20:20 |
creiht | GETs are a lot faster | 20:20 |
creiht | lol | 20:20 |
jeremyb | (all of this is per DC?) | 20:20 |
creiht | yes | 20:21 |
creiht | jeremyb: perhaps it might be easier to talk about a config you are looking at, or what type of performance you are looking for, and we can work from that? | 20:21 |
jeremyb | creiht: well i was mainly not asking for myself, asked nelson__'s partner to /j a few mins ago | 20:22 |
jeremyb | creiht: they're currently attempting to figure out what the volume is (including peak max) for wikimedia with the current system | 20:23 |
creiht | Another thing is that we try to optimize for PUT performance as much as possible, since GET performance is going to be way faster | 20:23 |
creiht | and DEL performance is pretty similar to PUT performance | 20:24 |
*** larstobi has quit IRC | 20:27 | |
burris | ok, I see stats collector and log uploader has to run on each account server, and syslog-ng + log uploader has to run on each proxy server, do we just run one log-stats-collector per cluster? | 20:30 |
vishy | soren: have you had a chance to look at cow-images branch yet? Do you think it should be pushed as well? | 20:30 |
*** ccustine has quit IRC | 20:30 | |
creiht | notmyname: -^ | 20:30 |
*** hggdh has joined #openstack | 20:31 | |
notmyname | burris: yes | 20:31 |
burris | thx | 20:32 |
soren | vishy: I glanced at it when you proposed it, but then forgot again. | 20:33 |
tr3buchet | i got tired of looking up the 9 commands it takes to register an image with euca, made a fairly useful script for taking care of it: https://github.com/tr3buchet/euca-register-image | 20:33 |
soren | vishy: I have two other reviews I've promised to do. If I'm still alive afterwards, I'll review the cow thing. | 20:34 |
vishy | k thx | 20:34 |
*** drico has joined #openstack | 20:34 | |
soren | vishy: I wonder, though, why you stopped passing an execute argument to the various methods? | 20:35 |
vishy | soren: because we only have one execute now, | 20:35 |
vishy | soren: i didn't see the point in passing it all over the place | 20:35 |
soren | vishy: Oh, that's why it was passed as an argument? | 20:36 |
soren | vishy: I thought it was for testing purposes. | 20:36 |
uvirtbot | New bug: #702580 in nova "Degister image with invalid Id raises UnknownError" [Undecided,New] https://launchpad.net/bugs/702580 | 20:36 |
vishy | soren: no it was because we had utils.execute and twisted process execute | 20:36 |
soren | Oh. | 20:36 |
soren | Hm. Ok. | 20:36 |
soren | well, I guess once I start adding tests for all this stuff I can add it back. | 20:36 |
burris | will multiple stats collector processes clobber each other? can we run multiple ones? | 20:38 |
vishy | sren: we should probably actually make execute in utils stubbable | 20:40 |
vishy | soren: ^^ | 20:40 |
*** sophiap has joined #openstack | 20:40 | |
soren | vishy: Yeah. | 20:41 |
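One common way to make a utils.execute stubbable, sketched here purely as an illustration rather than nova's actual implementation: keep a single module-level execute() that production code calls, and let tests swap it out instead of threading an execute argument through every method.

```python
# Sketch of the idea only (not nova's code): a module-level execute()
# that tests can monkeypatch or stub out.
import subprocess

def execute(*cmd):
    """Run a command and return its stdout."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    return out

# In a test, replace it with a fake before exercising the code under test:
#
#   calls = []
#   def fake_execute(*cmd):
#       calls.append(cmd)
#       return ''
#   utils.execute = fake_execute   # or use a stubout/mock helper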
soren | vishy: Hey, wanna trade? | 20:41 |
soren | vishy: If you review https://code.launchpad.net/~ntt-pf-lab/nova/ipv6-support/+merge/46103, I'll review the cow thing instead. | 20:41 |
vishy | soren: I assume that your iptables sec groups will not work with ipv6 | 20:41 |
vishy | i'm reviewing it right now | 20:41 |
soren | vishy: Yeah, it won't work. | 20:42 |
soren | vishy: I don't think it's hard, it's just that I needed the ipv6 address of the guest to be able to set up the rules, and I didn't want to define what that data model would look like, seeing as I haven't a clue about ipv6. | 20:42 |
soren | vishy: ...but once ipv6 lands, it should be reasonably straightforward to add iptables filtering for it. | 20:43 |
soren | vishy: Cool beans. I'll review the cow stuff. | 20:43 |
notmyname | burris: sorry. been busy with some other stuff and I haven't been paying attention | 20:46 |
burris | no sweat, I'm getting much more than I paid for | 20:46 |
*** fabiand_ has joined #openstack | 20:47 | |
*** littleidea has joined #openstack | 20:47 | |
ttx | masumotok: once the proposal is created, you can add reviewers | 20:47 |
notmyname | burris: so it looks like 2 log-stats-collector processes can clobber each other. I have a merge proposal to fix that, but it hasn't been approved yet | 20:47 |
notmyname | burris: but each copy will spawn enough child processes to max the machine | 20:48 |
ttx | masumotok: using the "request a review" button at the bottom of the reviewers list | 20:48 |
soren | vishy: Have you benchmarked qemu-nbd much? | 20:48 |
soren | vishy: Well, qemu-nbd itself is probably fine, but how much overhead does it add to go through it? | 20:49 |
vishy | soren: not sure, but it just for compatibility in key injection. IMO key_injection should ultimately be removed | 20:50 |
burris | notmyname, i see the bug about firing off another collector before the previous was done, but what about two collectors running at the same time on separate machines? do we have to pick a single machine to run this on at a time? | 20:50 |
vishy | soren: definitely it is a net win in time to launch | 20:50 |
soren | vishy: Oh, I didn't realise that's what it was for. Haven't made it that far in the patch yet. | 20:51 |
vishy | soren: meaning that the time saved by using CoW dramatically outweighs the overhead of qemu-ndb | 20:51 |
soren | vishy: But yes, key injection is a crime. It needs to stop. | 20:51 |
notmyname | burris: that won't cause a problem, but it isn't multi-machine aware. both will end up doing the same thing (unless you have 2 separate stats accounts too) | 20:51 |
vishy | s/ndb/nbd/ | 20:51 |
soren | vishy: It's only used during bootstrapping, you say? | 20:52 |
vishy | soren: i think it only makes sense in dev mode, and now that we have ami-tty and working metadata i'm not sure it makes sense at all | 20:52 |
notmyname | burris: future versions should be able to be scaled out, but it's good enough for now that we can use it to parse 1 hour's logs in less than an hour | 20:52 |
masumotok | ttx: understand. I added you as a reviewer. please review live migration branch | 20:52 |
vishy | soren: correct it mounts it only to inject the key/network | 20:52 |
ttx | masumotok: you don't need my approval before FeatureFreeze hits. I'm not a -core member so I don't count towards the two necessary approvals | 20:53 |
vishy | soren: going to lunch bbs | 20:53 |
ttx | masumotok: there still is ~11h to the freeze | 20:53 |
*** adiantum has quit IRC | 20:53 | |
vishy | masumotok: I will review that branch after lunch | 20:53 |
masumotok | vishy: thank you! I'm looking forward to your comments! | 20:54 |
soren | vishy: If it's only for injection, what's the point? | 20:54 |
soren | vishy: Oh, to support qcow2? | 20:54 |
soren | vishy: Right, that makes sense. Ok. | 20:55 |
masumotok | ttx: thanks! I almost forgot I need two approvals.. :) | 20:57 |
*** ugarit has quit IRC | 20:57 | |
*** adiantum has joined #openstack | 20:58 | |
*** adiantum has quit IRC | 21:05 | |
*** littleidea has quit IRC | 21:06 | |
*** brd_from_italy has quit IRC | 21:08 | |
*** adiantum has joined #openstack | 21:11 | |
*** nati has joined #openstack | 21:13 | |
*** reldan has quit IRC | 21:14 | |
*** johnpur has quit IRC | 21:16 | |
*** Cybo has quit IRC | 21:18 | |
*** Ryan_Lane|away is now known as Ryan_Lane | 21:19 | |
*** hadrian has quit IRC | 21:20 | |
*** hadrian has joined #openstack | 21:20 | |
*** fabiand_ has left #openstack | 21:21 | |
*** adiantum has quit IRC | 21:24 | |
*** DubLo7 has quit IRC | 21:24 | |
*** adiantum has joined #openstack | 21:28 | |
*** johnpur has joined #openstack | 21:34 | |
*** ChanServ sets mode: +v johnpur | 21:34 | |
*** flipzagging has joined #openstack | 21:37 | |
*** Dweezahr has joined #openstack | 21:37 | |
*** adiantum has quit IRC | 21:38 | |
*** sophiap has quit IRC | 21:38 | |
*** miclorb_ has joined #openstack | 21:42 | |
ttx | soren: Looks like https://code.launchpad.net/~rconradharris/nova/xs-snap-return-image-id-before-snapshot/+merge/45494 just need a flip of status | 21:42 |
ttx | soren: if you agree with the latest answer to your question | 21:42 |
*** adiantum has joined #openstack | 21:44 | |
ttx | same for https://code.launchpad.net/~ed-leafe/nova/xs-password-reset/+merge/45537 | 21:44 |
*** reldan has joined #openstack | 21:48 | |
*** Jbain has quit IRC | 21:49 | |
*** daleolds has quit IRC | 21:49 | |
*** rlucio has joined #openstack | 21:51 | |
*** larstobi has joined #openstack | 21:52 | |
* ttx goes to bed | 21:53 | |
*** ctennis has quit IRC | 21:53 | |
nati | I fixed the ipv6 branch just now. | 21:55 |
nati | I'm worried the ipv6 branch won't be merged in today. Muu.. What should I do next? | 21:56 |
soren | There's an exception process. | 22:07 |
soren | nati: ^ | 22:07 |
*** adiantum has quit IRC | 22:07 | |
*** aimon has quit IRC | 22:07 | |
soren | It's not that I don't want to review it. I just don't really feel qualified to do so. ipv6 is black magic to me. | 22:07 |
*** aimon has joined #openstack | 22:08 | |
soren | I once pinged a machine over ipv6. That's the only ipv6 experience I have. | 22:08 |
*** ctennis has joined #openstack | 22:08 | |
*** ctennis has joined #openstack | 22:08 | |
nati | soren: Thank you for your review. I'll try the exception process if our code is not merged. | 22:09 |
soren | vishy: cow branch reviewed. A couple of things to address. If you can do so within the next half hour or so, I'll be happy to approve. | 22:09 |
*** aimon_ has joined #openstack | 22:10 | |
vishy | soren: thanks I saw it. | 22:10 |
vishy | I will be fixing asap | 22:10 |
soren | \o/ | 22:10 |
*** aimon has quit IRC | 22:12 | |
*** aimon_ is now known as aimon | 22:12 | |
*** adiantum has joined #openstack | 22:12 | |
*** troytoman has quit IRC | 22:14 | |
nati | soren: An IPv6 expert at my company (NTT, a telco in Japan) also reviewed the IPv6 code. He worked on NGN (a very large scale network in Japan), so we think the code is OK from an IPv6 standpoint. | 22:15 |
nati | soren: In addition, we wrote a smoke test for IPv6. | 22:17 |
nati | Anyway, please feel free to ask us anything about IPv6. | 22:18 |
*** westmaas has quit IRC | 22:18 | |
*** allsystemsarego has quit IRC | 22:19 | |
soren | nati: The one question I had seems to have not been answered. The one about "link local" addresses. | 22:19 |
nati | Sorry! I'll check it. | 22:20 |
soren | sandywalsh: around? | 22:20 |
nati | get_my_linklocal is intended to get the ipv6 link-local address. | 22:21 |
soren | sandywalsh: Have your concerns been addressed on https://code.launchpad.net/~ed-leafe/nova/xs-password-reset/+merge/45537 ? | 22:21 |
soren | nati: Yes, that's what the method name suggests. However, that doesn't seem to be what the method does. | 22:22 |
soren | nati: Again, this is me not knowing anything about ipv6, so I looked it up on Wikipedia. Wikipedia says that link local addresses in ipv6 are in the fe80::/10 range. | 22:23 |
soren | nati: Yet, as far as I can tell, that method just gets the ipv6 address of a given interface. | 22:23 |
nati | OK, I get your point. Should I change the method name to get_linklocal_v6? | 22:23 |
soren | I don't know :) | 22:24 |
soren | I don't know if it's really supposed to find link local addresses or if it's supposed to find the ipv6 address. | 22:24 |
soren | It just seems to me that what the method name suggests the method is going to do doesn't match what it actually does. | 22:25 |
soren | I think. | 22:25 |
soren | Again, ipv6 == magic. | 22:25 |
nati | It is supposed to find link-local because we use inet6 and Scope:Link in the regular expression. | 22:25 |
soren | vishy: How's your ipv6-fu? | 22:25 |
uvirtbot | New bug: #702636 in nova "Adding existing group gives weird error message" [Undecided,New] https://launchpad.net/bugs/702636 | 22:26 |
vishy | soren: can't say i know much about it :( | 22:26 |
soren | nati: Can you give me an example output of "ip -f inet6 -o addr show %s" % interface ? | 22:26 |
soren | nati: Maybe that will help me understand. | 22:26 |
*** ppetraki has quit IRC | 22:27 | |
nati | soren: OK, I'll write an example on https://code.launchpad.net/~ntt-pf-lab/nova/ipv6-support/+merge/46192 | 22:28 |
nati | We can get the ipv6 address using the -f inet6 option. | 22:29 |
nati | The scope in the output indicates the type of ipv6 address, and "link" means link-local. | 22:29 |
Xenith | soren: It grabs the link-local address, as far as I can tell. | 22:31 |
Xenith | "3: eth0 inet6 fe80::21a:92ff:fe2f:8181/64 scope link \ valid_lft forever preferred_lft forever" | 22:31 |
Xenith | the "scope link" bit is the key. | 22:31 |
Xenith | Which is what the function matches on. | 22:32 |
*** phymata has quit IRC | 22:32 | |
nati | Xenith : yes. Thank you :) | 22:32 |
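A rough sketch of the kind of parsing being discussed (not necessarily the exact code in the ipv6 branch): run `ip -f inet6 -o addr show <interface>` and keep the address whose scope is "link", i.e. the fe80::/10 link-local address.

```python
# Sketch only, not necessarily the ipv6 branch's implementation: parse
# "ip -f inet6 -o addr show <iface>" and return the link-local address,
# identified by "scope link" in the output.
import re
import subprocess

def get_link_local(interface):
    proc = subprocess.Popen(
        ['ip', '-f', 'inet6', '-o', 'addr', 'show', interface],
        stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    # Example line:
    # "3: eth0 inet6 fe80::21a:92ff:fe2f:8181/64 scope link \ valid_lft forever ..."
    match = re.search(r'inet6\s+([0-9a-fA-F:]+)/\d+\s+scope\s+link', out.decode())
    if match:
        return match.group(1)
    raise RuntimeError('no link-local address found on %s' % interface)
```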
termie | sandywalsh: i am now | 22:33 |
termie | sandywalsh: i didn't update the blueprint :( | 22:33 |
termie | sandywalsh: got sidetracked | 22:33 |
termie | by packaging and installation hackation | 22:33 |
soren | termie: Where are you located nowadays? | 22:36 |
termie | soren: San Francisco, California | 22:36 |
soren | termie: Ah. Didn't realise you left Berlin :) | 22:36 |
Xenith | termie: Lovely weather today, no? :) | 22:37 |
nati | It is lovely in Japan too. But it is very cold now :) | 22:38 |
termie | Xenith: at least rain usually means it isn't so cold | 22:39 |
termie | soren: aye, was a whole two months in Tokyo between there and here, too | 22:40 |
soren | termie: Fascinating. | 22:40 |
termie | soren: isn't it though? I'm sure we could talk all day about it. | 22:40 |
termie | also, foam bath stickers | 22:41 |
termie | never had them as a kid but i am on a mission to purchase them for small people i know | 22:41 |
soren | Foam bath stickers? | 22:41 |
termie | they are foam and when they get wet they stick to the walls in the bathroom next to the tub | 22:42 |
soren | Awesome. | 22:42 |
termie | so you can make stuff and play with it | 22:42 |
termie | i had i think a scuba diver dude that never really worked well | 22:43 |
termie | and maybe a dinosaur | 22:43 |
termie | but these stickers would have made my day | 22:43 |
termie | i had i think a scuba diver dude that never really worked well | 22:43 |
termie | erm | 22:43 |
termie | http://www.bestdressedkids.com/proddetail.php?prod=146119 | 22:43 |
soren | I don't even remember being bathed. | 22:44 |
nati | soren: Thank you for your approve | 22:46 |
soren | Sure. | 22:47 |
Xenith | IPv6 is my thing, so it's one of the aspects of nova I plan to keep track of. Just gotta learn python. :) | 22:47 |
nati | Xenith : Oh really? It sounds great! :) | 22:48 |
nati | vish: If you don't mind, please approve IPv6 branch. | 22:49 |
Xenith | I work at Hurricane Electric, kind of a job requirement. :) | 22:49 |
*** rlucio has quit IRC | 22:49 | |
soren | Xenith: Oh. | 22:51 |
soren | Xenith: Cool! | 22:51 |
*** zul has quit IRC | 22:52 | |
nati | We are discussing a nova IPv6 demo. For example, 500 instances have public IP addresses and run a p2p application. | 22:53 |
soren | One thing about ipv6 that I keep hearing that makes absolutely no sense to me is that it allegedly doesn't support NAT. | 22:53 |
soren | I don't see how that even makes sense. | 22:53 |
creiht | NAT is so ipv4 | 22:54 |
pquerna | mm? i mean, you *could* NAT in ipv6, but the only reason would be to 'firewall' things | 22:54 |
*** hggdh has quit IRC | 22:54 | |
nati | We need NAT because the number of IPv4 addresses is small. In IPv6, we have huge numbers of addresses, so we don't need NAT. | 22:54 |
pquerna | but even then, its prolly best to have a good firewall setup. | 22:54 |
nati | pquerna: Yes :) | 22:54 |
soren | Well, either firewalling or to just pass connections on to another host, transparent to the client. | 22:55 |
nati | NAT is not for security. | 22:55 |
Xenith | soren: NAT66 is what NAT for ipv6 is called | 22:55 |
jk0 | soren: there is no need for NAT | 22:55 |
jk0 | everything is routable | 22:55 |
Xenith | It exists, just a lot of IPv6 professionals hate NAT, so are always trying to talk people out of using it. | 22:56 |
*** Jbain has joined #openstack | 22:56 | |
jk0 | you can still control the "LAN" by setting up rules for your /whatever on your router | 22:56 |
nati | jk0 : Yes. | 22:56 |
jk0 | it's what I do for my /48 | 22:57 |
soren | jk0: I don't necessarily want to route. What if I want to have clients hit a single address, but pass the connection on to another host than the one actually holding that address? | 22:57 |
pquerna | you mean a proxy? | 22:57 |
soren | Yes. | 22:57 |
pquerna | or load balancer really. | 22:57 |
Xenith | soren: In reality, you don't really need NAT on v6, just a stateful firewall. | 22:57 |
soren | Yeah. | 22:57 |
jk0 | yeah, that would be LB | 22:58 |
jk0 | same idea, just no natting | 22:58 |
pquerna | you can still do that | 22:58 |
pquerna | exactly | 22:58 |
soren | So when people say NAT isn't supported, they're full of it? | 22:58 |
soren | That's ok. | 22:58 |
Xenith | Yes and no :) | 22:58 |
soren | Well, it's not, they should shut up, but that at least makes sense. | 22:58 |
nati | We can also set one IPv6 address for multiple host. http://ezinearticles.com/?Ipv6-Anycast-Addresses&id=3755025 | 22:58 |
jk0 | I'm not sure what they mean by referring to anything related to NAT with IPv6 | 22:58 |
Xenith | Like regular NAT, NAT66 is just an extension, its not built into the protocol. | 22:59 |
soren | For instance, ip6tables offers me neither DNAT nor SNAT. | 22:59 |
soren | What's up with that? | 22:59 |
jk0 | it's not needed | 22:59 |
jk0 | you don't need to translate any ports since each node has their own routable IP | 23:00 |
*** DubLo7 has joined #openstack | 23:00 | |
jk0 | ip6tables could be used to allow certain traffic on certain ports to IPs behind your router, or to block it | 23:00 |
Xenith | soren: NAT66 is technically an experimental proposal, so no one really implements it. | 23:00 |
soren | Right now, I often use DNAT if I move a service from one box to another. Until DNS catches up, I use NAT to get the connections to where they need to be. | 23:01 |
Xenith | You can just using proxying for that. | 23:01 |
uvirtbot | New bug: #702651 in nova "libvirt_conn chmods instance dirs 777" [Undecided,New] https://launchpad.net/bugs/702651 | 23:01 |
soren | Yes. | 23:01 |
soren | Proxying is one application for NAT. | 23:01 |
soren | I'd like to have my kernel do it. | 23:01 |
jk0 | it's a completely different mindset, just takes some time playing with/getting used to | 23:01 |
soren | Instead of expensive userspace things. | 23:01 |
*** lvaughn_ has quit IRC | 23:01 | |
jk0 | IPv6 is what IPv4 intended to be | 23:02 |
Xenith | What jk0 said. Don't think of IPv6 in IPv4 terms. | 23:02 |
Xenith | They don't map one-to-one. | 23:02 |
*** lvaughn has joined #openstack | 23:02 | |
soren | So how do I get started? | 23:03 |
*** Tushar has quit IRC | 23:03 | |
jk0 | head to tunnelbroker.net and get a /64 or /48 | 23:03 |
soren | I already have an account there, I think. | 23:04 |
jk0 | then setup a tunnel on your router/linux box, and install radvd | 23:04 |
soren | It's just not much use to me if I'm going to use it ALL WRONG. | 23:04 |
jk0 | once your tunnel is up, radvd will allow all of your clients on the LAN to automatically pick up an address | 23:04 |
jk0 | and it will "just work" | 23:05 |
jk0 | but I suggest using ip6tables on your tunnel box to secure things | 23:05 |
soren | I've heard that before :) | 23:05 |
Xenith | soren: Just start with getting outbound ipv6 connectivity working on your local network. Build up, no need to jump right into the crazy v6 forwarding stuff :) | 23:05 |
* soren takes a quick break | 23:06 | |
jk0 | it's actually really fun to play with | 23:06 |
nati | jk0: Yes. I like v6. I think design of v6 is cool. :) | 23:07 |
jk0 | it's amazing. I can't wait unil everyone is forced to use it :) | 23:07 |
nati | jk0: We can use v6 on next nova :) | 23:09 |
jk0 | :D | 23:09 |
soren | I think I already have a tunnelbroker.net account. | 23:11 |
jk0 | log in and check it out, they give you copy/paste directions for setting up the tunnel | 23:12 |
jk0 | it's just a matter of assigning a client /64 to an interface | 23:12 |
soren | Oh, yeah, this looks familiar. | 23:13 |
Xenith | jk0: Well, if you're terminating the tunnel on your workstation instead of a router, you don't need to do that. | 23:13 |
jk0 | once that is done, you'll want to assign an IP on your own /64 to eth0 (or whatever is on your LAN) | 23:13 |
jk0 | ah, I assumed he was setting it up on the LAN | 23:14 |
*** rlucio has joined #openstack | 23:14 | |
soren | I somehow doubt my router groks ipv6. | 23:14 |
Xenith | What kind of router? | 23:14 |
soren | Netgear. | 23:14 |
soren | VVG2000. | 23:14 |
jk0 | you don't have to set it up on your router if you have a linux box | 23:14 |
jk0 | radvd will take care of that for you | 23:15 |
soren | I have one of those. | 23:15 |
soren | Ok, so I've run the commands tunnelbroker told me to. | 23:15 |
soren | How to test if it works? | 23:15 |
jk0 | ping6 ipv6.google.com | 23:15 |
soren | What to ping? | 23:15 |
soren | No luck. | 23:16 |
vishy | soren: pushed code and comments | 23:16 |
nati | soren: If you run the ipv6 branch, it installs radvd. | 23:16 |
soren | vishy: \o/ | 23:16 |
Xenith | soren: What's the output say? | 23:16 |
jk0 | oh, I know why | 23:16 |
jk0 | you probably need to update your client IPv4 address in tunnelbroker | 23:17 |
jk0 | http://tunnelbroker.net/ipv4_update.php?tunnel_id=<tunnel id> | 23:17 |
Xenith | a | 23:17 |
*** adiantum has quit IRC | 23:17 | |
*** fcarsten has joined #openstack | 23:18 | |
Xenith | Actually, my question is, which client ipv4 address did he use in the commands to create the tunnel? | 23:18 |
*** Tushar has joined #openstack | 23:18 | |
jk0 | no need to specify that in linux | 23:18 |
Xenith | Most people don't notice that if they are behind a NAT device (like soren is) they need to use their dhcp-given internal IP instead of the public IP. | 23:19 |
jk0 | hm, I dunno about that.. I use my public IP | 23:19 |
jk0 | you might be right though | 23:19 |
fcarsten | Swift question: I'm looking for some examples of swift usage in production environments. Is Rackspace using swift in its own production environment? Any other big users? | 23:20 |
creiht | fcarsten: Swift is what currently backs Rackspace's cloud files | 23:21 |
creiht | NASA is using it | 23:21 |
creiht | cloudscaling is setting it up for some customers | 23:21 |
soren | Xenith: My internal IP? How do the packets find their way back? | 23:21 |
jk0 | I would try on your public IP first | 23:21 |
Xenith | soren: That's what NAT is doing. | 23:22 |
Xenith | See why NAT is confusing? :) | 23:22 |
nati | I should go. Bye :) | 23:22 |
*** nati has quit IRC | 23:22 | |
soren | Xenith: The ipv4 endpoint I tell tunnelbroker about should be... what? My external IP? | 23:23 |
Xenith | soren: Your NAT device basically sees the tunnel traffic as any other IP traffic, and NATs accordingly. The reason you need to use the private IP when creating the tunnel is because your local system needs to know which IP/interface to attach the tunnel to. | 23:23 |
Xenith | soren: The one you tell tunnelbroker should be the public IP. | 23:23 |
soren | Xenith: Yeah, that makes sense. | 23:23 |
soren | Xenith: Gotcha. | 23:23 |
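(A short sketch of the distinction Xenith draws, with placeholder addresses: the local tunnel command binds to the box's own private address, while the endpoint registered with tunnelbroker is the public one.)

    # On the Linux box behind NAT, the local endpoint is its own DHCP-assigned private address
    ip tunnel add he-ipv6 mode sit remote <he-server-ipv4> local 192.168.1.10 ttl 255
    # On tunnelbroker.net, the client IPv4 endpoint is the public address the NAT device presents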
fcarsten | creiht: thanks. Do you know if Rackspace (and NASA) use additional backup methods or solely rely on swift replication for redundancy / disaster recovery? | 23:23 |
*** adiantum has joined #openstack | 23:23 | |
creiht | fcarsten: We rely on the swift replication/redundancy | 23:24 |
* jk0 uses his Cisco 871 for this | 23:24 | |
creiht | fcarsten: I'm on the cloudfiles team :) | 23:24 |
*** pvo is now known as pvo_away | 23:24 | |
jk0 | but I did it on Linux first for testing | 23:24 |
*** pvo_away is now known as pvo | 23:25 | |
*** pvo is now known as pvo_away | 23:25 | |
jk0 | it works surprisingly well. even my wife's Vista laptop picked up an address | 23:25 |
Xenith | I just replaced my cisco 2651 with a juniper srx100. Works a lot nicer. | 23:26 |
jk0 | haven't had the pleasure of using Juniper routers | 23:26 |
Xenith | Though for some reason I haven't had time to debug, SLAAC doesn't work on it. | 23:26 |
Xenith | Otherwise ipv6 works great. (Just gotta give systems static IPs) | 23:26 |
jk0 | I've played with their proxy appliances years ago | 23:26 |
jk0 | ah | 23:26 |
jk0 | well that's not too bad | 23:27 |
*** aliguori has quit IRC | 23:28 | |
fcarsten | creiht: thanks. Did you do any risk analysis for this? And if yes, is it available (methodology description and results)? | 23:28 |
*** jimbaker has quit IRC | 23:29 | |
creiht | fcarsten: does crossing fingers count? :) | 23:29 |
creiht | fcarsten: there wasn't a formal risk analysis done | 23:30 |
fcarsten | creiht: :-) | 23:30 |
fcarsten | creiht: probably not good enough for our level of paranoia. Do you know if NASA did any risk analysis? | 23:31 |
creiht | not sure | 23:31 |
creiht | devcamcar: -^ ? | 23:31 |
creiht | fcarsten: short story is, we store 3 replicas of every object | 23:32 |
creiht | each replica is stored in a different isolation zone | 23:32 |
creiht | the isolation zone can be a server, a rack, a power source, etc. | 23:32 |
creiht | which is a tool that lets you reduce the risk of failures preventing the availability of data | 23:33 |
vishy | fcarsten: we don't have swift in production yet | 23:34 |
vishy | fcarsten: so no formal risk analysis yet | 23:34 |
fcarsten | vishy: thanks | 23:34 |
creiht | fcarsten: is there any specific information about how this works that would help? | 23:35 |
*** Lcfseth has joined #openstack | 23:35 | |
creiht | I realize I could talk for quite a while about the things swift does to keep data available | 23:36 |
*** gondoi has quit IRC | 23:36 | |
fcarsten | creiht: The context for my question: | 23:36 |
jk0 | soren: any luck? | 23:36 |
fcarsten | creiht: I've been playing around with swift a bit and it seems to work well enough for us (CSIRO) to consider starting to use it internally. | 23:39 |
*** jimbaker has joined #openstack | 23:41 | |
soren | jk0: Nope. | 23:41 |
jk0 | did you add the route? | 23:41 |
creiht | fcarsten: which part of the risk are you worried about? data loss? | 23:42 |
fcarsten | creiht: The big question for me at the moment is "How much can we trust swift with our data?". I.e. is it stable / scalable enough for production use? The answer seems to be "Most likely, and we can evaluate that further ourselves". Is it safe enough to entrust our data to it without additional backup? The answer there is less clear. | 23:43 |
jk0 | soren: oh, also: sysctl -w net.ipv6.conf.all.forwarding=1 | 23:43 |
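(To make that setting survive a reboot, a small sketch assuming a stock /etc/sysctl.conf:)

    # /etc/sysctl.conf
    net.ipv6.conf.all.forwarding = 1
    # then reload with: sysctl -p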
fcarsten | creiht: The general redundancy approach looks convincing, but the open question is how well swift currently implements it. E.g. how likely is it that there is a bug in swift which might cause all copies to get corrupted / deleted? | 23:44 |
notmyname | fcarsten: to brag a little, thousands of rackspace customers trust swift with petabytes of data right now. we run it in production as Cloud Files | 23:44 |
soren | jk0: We think my router won't let me do it. | 23:45 |
jk0 | hm | 23:45 |
jk0 | the tunnel is created on the linux box itself though | 23:45 |
xtoddx | sandywalsh: new wsgirouter branch available | 23:46 |
jk0 | oh, I wonder if perhaps protocol 41 is being filtered | 23:46 |
*** lvaughn_ has joined #openstack | 23:46 | |
creiht | fcarsten: so we start with 3 replicas of every piece of data | 23:46 |
*** dovetaildan has quit IRC | 23:46 | |
creiht | you can actually use more replicas if you like, if the extra storage space isn't an issue | 23:46 |
xtoddx | sandywalsh: it fixes nova-combined and the flags issue, as well as generally refactoring some of the paste stuff into nova.wsgi | 23:47 |
jk0 | soren: look around on your router for passing protocol 41 | 23:47 |
creiht | second, we isolate failure of those 3 replicas as high up the stack as possible | 23:47 |
fcarsten | notmyname: I understand that and it is a very positive indicator. People here unfortunately will want to see something more formal :-( | 23:48 |
creiht | so we ensure that 2 or more replicas are never on the same machine, or even in the same cabinet, network, or power grid | 23:48 |
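(A hedged sketch of how those replicas and zones get expressed when a swift ring is built; the partition power, IPs, and device names below are illustrative, not Rackspace's actual layout:)

    # Object ring with 2^18 partitions, 3 replicas, 1 hour minimum between partition moves
    swift-ring-builder object.builder create 18 3 1
    # Add devices in different zones (z1..z3) so no two replicas land in the same zone
    swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100
    swift-ring-builder object.builder add z2-10.0.0.2:6000/sdb1 100
    swift-ring-builder object.builder add z3-10.0.0.3:6000/sdb1 100
    swift-ring-builder object.builder rebalance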
Xenith | jk0: Yeah, I think that's the issue: his router isn't passing protocol 41 correctly. | 23:48 |
jk0 | soren: otherwise it might be time to switch to a better router ;) | 23:48 |
soren | jk0: That's what I've been doing for the past 10 minutes :) | 23:48 |
soren | jk0: Can't. | 23:48 |
jk0 | why not use a linux box? | 23:48 |
soren | jk0: It's from my ISP. | 23:48 |
jk0 | ah | 23:48 |
jk0 | well that sucks | 23:48 |
jk0 | I wonder if you could switch it to bridge mode | 23:49 |
creiht | when an object is put into the system, it is streamed to all 3 replicas, and success is only reported if at least 2 of the writes succeeded | 23:49 |
*** lvaughn has quit IRC | 23:49 | |
creiht | all writes are fsynced to disk before reporting success to ensure they are not in volatile cache | 23:50 |
creiht | replication is push based and is always running on all the storage nodes in the cluster | 23:51 |
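(From a client's point of view all of that is invisible: a PUT only returns success once at least 2 of the 3 replica writes have been fsynced. A sketch against swift's REST API, with placeholder token, account, container, and proxy URL:)

    # Upload an object; the proxy answers only after enough replica writes have succeeded
    curl -T ./report.pdf \
         -H "X-Auth-Token: <token>" \
         https://swift.example.com/v1/AUTH_account/backups/report.pdf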
fcarsten | creiht: thanks. I think that gives me enough information, for now ;-) | 23:52 |
creiht | hehe | 23:52 |
creiht | ok | 23:52 |
*** sophiap has joined #openstack | 23:52 | |
creiht | fcarsten: if you want to dive deeper, let me know | 23:52 |
fcarsten | creiht: ok thanks. | 23:53 |
*** nati has joined #openstack | 23:53 | |
sirp- | looks like we'll need glance on the Hudson server for xs-snaps to merge | 23:54 |
fcarsten | creiht: one more :-) : Is it currently possible to have 5 zones distributed over 2 data centers and guarantee that not all copies are in the same data center? (I understand 5 zones is the recommended number) | 23:55 |
jk0 | soren: google sure isn't any help when looking up your router model | 23:57 |
Xenith | Seriously. | 23:57 |
soren | Tell me about it. | 23:57 |
jk0 | must be a very "special" model :) | 23:58 |
Xenith | jk0: You might also be interested, but there is a #ipv6 channel that's pretty useful at times. | 23:58 |
creiht | fcarsten: Not yet... There is a blueprint for that though, and is something we will probably be working on soon | 23:58 |
fcarsten | thanks | 23:58 |
jk0 | Xenith: cool, thanks, I'll have to check that out some time | 23:58 |
*** dovetaildan has joined #openstack | 23:58 | |
jk0 | soren: do you have DSL w/PPPoE? | 23:58 |
jk0 | if so I think you're better off flipping the modem to bridge mode | 23:59 |
jk0 | and let linux handle the pppoe | 23:59 |
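(A rough sketch of that approach on a Debian/Ubuntu box, assuming the ISP modem really can be flipped to bridge mode; the package and peer names are the distro defaults, not anything confirmed in this conversation:)

    # Install the interactive PPPoE setup helper and point it at the interface facing the modem
    apt-get install pppoeconf
    pppoeconf eth1
    # Bring the DSL session up (pppoeconf writes the dsl-provider peer by default)
    pon dsl-provider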
creiht | fcarsten: We will eventually want that, and NASA needs it as well | 23:59 |