Thursday, 2011-11-24

*** pradeep has quit IRC00:00
*** RobertLaptop has left #openstack00:03
*** perestrelka has quit IRC00:03
*** perestrelka has joined #openstack00:04
*** hallyn has quit IRC00:07
*** jdg has quit IRC00:07
*** martine has quit IRC00:09
colikiall: have another issue with your scripts. it seems to me it's script related.00:10
colikiall: a starting VM tries to connect to port 80 on 169.254.169.25400:10
*** mattstep has quit IRC00:11
colikiall: iptables uses DNAT to change the destination address to port 8773 on the management node (8773 is nova-api)00:11
colikiall: however the iptables also use SNAT to change the source address to the compute node's real public IP address00:12
*** afm has joined #openstack00:13
colikiall: nova-api on the management node receives the request, treats it as coming from the real public IP address, and searches the database for metadata for the fixed_ip matching the compute node's real public IP instead of the VM's fixed IP00:13
colikiall: shouldn't the iptables only use DNAT to "redirect" to nova-api on the compute node and not on the management node? and avoid SNAT as well?00:15
KiallIRC keeps popping over my movie ;) Damn you!00:16
coliI do apologise then and shut myself up :-)00:16
*** guigui1 has quit IRC00:16
KiallAnyway - nova itself sets all those rules up, so either its something missing/wrong in nova.conf, or a bug in nova.. :)00:17
* Kiall gets back to his film ;)00:17
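The metadata redirect coli is describing is typically set up by nova-network with NAT rules roughly like the following (a sketch of Diablo-era behaviour with placeholder values; exact chain names vary by release):

    # DNAT the magic metadata address to the host/port named by ec2_dmz_host / ec2_port
    iptables -t nat -A nova-network-PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
        -j DNAT --to-destination <ec2_dmz_host>:8773
    # traffic from the fixed range to dmz_cidr is exempted from SNAT; everything else is
    # SNATed to routing_source_ip, which is why a missing dmz_cidr makes the metadata
    # request appear to come from the compute node's public address
    iptables -t nat -A nova-network-POSTROUTING -s <fixed_range> -d <dmz_cidr> -j ACCEPT
    iptables -t nat -A nova-network-snat -s <fixed_range> -j SNAT --to-source <routing_source_ip>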
*** ldlework has quit IRC00:18
*** mattstep has joined #openstack00:20
*** afm1 has joined #openstack00:21
*** ben- has joined #openstack00:22
*** maplebed is now known as Guest8900600:23
*** Guest89006 has quit IRC00:24
*** afm has quit IRC00:24
*** ben- is now known as maplebed00:25
*** maplebed has joined #openstack00:25
*** afm1 has quit IRC00:25
*** cereal_bars has quit IRC00:27
*** rustam has quit IRC00:27
*** rustam has joined #openstack00:27
*** mattstep_ has joined #openstack00:28
*** janpy has joined #openstack00:30
*** mattstep has quit IRC00:30
*** mattstep_ is now known as mattstep00:30
*** rustam has quit IRC00:35
*** rustam has joined #openstack00:35
*** mattstep has quit IRC00:36
*** sandywalsh has quit IRC00:37
*** mattstep has joined #openstack00:37
*** mattstep has quit IRC00:42
*** adjohn has joined #openstack00:44
*** adjohn has quit IRC00:44
*** mattstep has joined #openstack00:49
*** mattstep has quit IRC00:50
*** deshantm_laptop has joined #openstack00:50
*** theocjr has joined #openstack00:50
*** rnorwood has quit IRC00:51
*** livemoon has joined #openstack00:54
*** MarkAtwood has quit IRC00:55
*** quake has joined #openstack00:57
*** quake has quit IRC01:01
*** neotrino has quit IRC01:05
*** nati2_ has joined #openstack01:08
*** stanchan has quit IRC01:19
*** rustam has quit IRC01:22
*** nati2 has quit IRC01:22
*** swill has joined #openstack01:22
*** theocjr has quit IRC01:22
livemoonmorning01:22
*** cmasseraf has quit IRC01:26
*** cmasseraf has joined #openstack01:26
*** dolphm has joined #openstack01:34
*** nati2 has joined #openstack01:34
uvirtbotNew bug: #894218 in nova "the instance'ip  lease time in DHCPflat mode" [Undecided,New] https://launchpad.net/bugs/89421801:35
*** pixelbeat has quit IRC01:36
*** nati2_ has quit IRC01:37
*** andreas__ has quit IRC01:38
*** debo-os has joined #openstack01:43
livemoonhi, nova-compute is not running because of libvirtd-bin. Has anyone met this issue?01:43
*** maplebed has quit IRC01:45
_rfzlivemoon - the "make sure BIOS is enabled" error?01:46
*** obino has quit IRC01:47
livemoon_rfz: I mean nova-compute can run for some time, maybe one or two days01:48
*** rods has quit IRC01:48
livemoonthen nova-compute will stop updating host status, and it seems nova-compute is not running.01:48
livemoonI need to kill nova-compute, restart libvirtd-bin and start nova-compute01:49
_rfzlivemoon, I haven't seen that error01:49
*** pradeep1 has joined #openstack01:50
*** rackerhacker has quit IRC01:50
livemoonoh01:50
*** tsuzuki_ has joined #openstack01:50
*** nati2 has quit IRC01:52
*** sdake has quit IRC01:52
*** nati2 has joined #openstack01:52
*** rackerhacker has joined #openstack01:54
*** debo-os has quit IRC01:54
*** emid has joined #openstack01:55
*** dragondm has joined #openstack01:55
*** rackerhacker has quit IRC01:55
*** rackerhacker has joined #openstack01:55
*** 36DAAU429 has joined #openstack01:55
*** troya has joined #openstack01:55
*** debo-os has joined #openstack01:55
*** troya has quit IRC01:58
*** debo-os has quit IRC02:01
*** vladimir3p has quit IRC02:02
*** nati2_ has joined #openstack02:04
*** nati2 has quit IRC02:06
*** rackerhacker has quit IRC02:06
*** rnorwood has joined #openstack02:07
*** rsampaio has joined #openstack02:08
*** sdake has joined #openstack02:09
*** troya has joined #openstack02:15
*** bengrue has quit IRC02:17
*** jdurgin has quit IRC02:18
*** debo-os has joined #openstack02:20
troyahi all02:21
*** debo-os has quit IRC02:25
*** jkyle has quit IRC02:34
*** mattstep has joined #openstack02:35
*** mwhooker has quit IRC02:35
*** dolphm has quit IRC02:38
*** n8 has joined #openstack02:48
*** n8 is now known as Guest3526902:48
*** emid has quit IRC02:52
*** dolphm has joined #openstack02:53
*** shang has quit IRC02:53
*** osier has joined #openstack03:01
*** nati2_ has quit IRC03:04
*** nati2 has joined #openstack03:05
*** shang has joined #openstack03:10
*** Guest35269 has quit IRC03:14
*** dpippenger has quit IRC03:15
*** obino has joined #openstack03:19
*** troya has quit IRC03:21
*** sdake has quit IRC03:23
*** negronjl has joined #openstack03:23
*** vipul_ has joined #openstack03:25
*** troya has joined #openstack03:26
*** pradeep1 has quit IRC03:29
*** sandywalsh has joined #openstack03:30
*** shang has quit IRC03:33
*** woleium has quit IRC03:33
colikiall: I'm positive that your nova.conf on the compute nodes is missing dmz_cidr and that ec2_dmz_host has the wrong value :-)03:34
colikiall: will explain tomorrow, going to sleep now :-)03:34
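A minimal sketch of the compute-node nova.conf flags coli is referring to, with placeholder values (Diablo-era --flag=value syntax):

    --ec2_dmz_host=<internal IP of the node running nova-api>
    --ec2_port=8773
    --dmz_cidr=<network(s) that must reach instances without SNAT, e.g. the management/API range>

With ec2_dmz_host pointing at the right API host and dmz_cidr covering it, the metadata request keeps the VM's fixed IP as its source and nova-api can look up the correct instance.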
troyahi coli03:36
colihi and bye ;-003:36
*** sdake has joined #openstack03:37
*** sandywalsh has quit IRC03:37
troyabye coli03:40
*** rackerhacker has joined #openstack03:44
*** n8 has joined #openstack03:48
*** n8 is now known as Guest939903:48
*** rnorwood has quit IRC03:51
*** rnorwood has joined #openstack03:53
*** MarkAtwood has joined #openstack03:53
*** rackerhacker has quit IRC03:56
*** map_nw_ has joined #openstack03:56
*** deshantm_laptop has quit IRC03:58
*** map_nw has quit IRC03:58
*** pradeep1 has joined #openstack04:01
*** woleium has joined #openstack04:02
*** cmasseraf has quit IRC04:02
*** deshantm_laptop has joined #openstack04:02
*** deshantm_laptop has quit IRC04:03
*** MarkAtwood has quit IRC04:15
*** debo-os has joined #openstack04:17
*** nati2_ has joined #openstack04:18
*** nati2 has quit IRC04:20
*** koolhead17 has quit IRC04:26
*** tsuzuki_ has quit IRC04:38
*** DavorC has joined #openstack04:39
*** nati2_ has quit IRC04:40
*** nati2 has joined #openstack04:40
*** nati2 has quit IRC04:40
*** nati2 has joined #openstack04:40
*** nati2 has quit IRC04:41
*** nati2 has joined #openstack04:41
*** rsampaio has quit IRC04:43
*** MarkAtwood has joined #openstack04:45
*** rnorwood has quit IRC04:46
*** abecc has quit IRC04:51
*** rnorwood has joined #openstack04:52
*** hadrian has quit IRC04:59
*** vipul_ has quit IRC04:59
*** supriya has joined #openstack05:00
*** DavorC has quit IRC05:01
*** rackerhacker has joined #openstack05:11
*** mjfork has quit IRC05:13
*** rackerhacker has quit IRC05:22
*** debo-os has quit IRC05:25
*** cp16net has quit IRC05:27
*** YSPark has joined #openstack05:35
*** koolhead17 has joined #openstack05:36
YSParkWhen a VM image is delivered by Glance, is the image located on the local server?05:37
*** nerens has joined #openstack05:37
*** cp16net has joined #openstack05:37
YSParkIs it copied to the local server?05:37
YSPark??05:38
*** YSPark_ has joined #openstack05:39
*** pradeep1 has quit IRC05:41
*** cp16net_ has joined #openstack05:43
*** YSPark has quit IRC05:43
*** cp16net has quit IRC05:45
*** cp16net_ is now known as cp16net05:45
*** odyi has quit IRC05:51
*** odyi has joined #openstack05:51
*** odyi has joined #openstack05:51
*** shang has joined #openstack05:55
uvirtbotNew bug: #843066 in keystone "Unable to auth against nova with keystone enabled novaclient ..." [High,Confirmed] https://launchpad.net/bugs/84306605:55
*** localhost has quit IRC06:00
*** localhost has joined #openstack06:01
*** juddm has quit IRC06:01
*** juddm has joined #openstack06:01
*** cp16net has quit IRC06:05
*** jmckenty has joined #openstack06:12
*** winston-d has quit IRC06:13
*** hugokuo has joined #openstack06:15
*** HugoKuo__ has quit IRC06:18
*** Guest9399 has quit IRC06:29
*** n8 has joined #openstack06:30
*** n8 is now known as Guest4467506:30
*** Guest44675 has quit IRC06:34
*** arBmind has joined #openstack06:38
*** debo-os has joined #openstack06:41
*** rnorwood has quit IRC06:42
*** nati2_ has joined #openstack06:42
*** nati2 has quit IRC06:45
*** miclorb_ has quit IRC06:47
*** dolphm has quit IRC06:47
*** n8 has joined #openstack06:56
*** n8 is now known as Guest4817606:56
*** pradeep1 has joined #openstack06:57
*** guigui1 has joined #openstack07:01
*** arBmind|2 has joined #openstack07:07
*** arBmind has quit IRC07:07
*** kaigan_ has joined #openstack07:07
*** Guest48176 has quit IRC07:08
*** nati2_ has quit IRC07:12
*** mindpixel has joined #openstack07:15
*** mikhail has joined #openstack07:15
*** TheOsprey has joined #openstack07:20
*** koolhead17 has quit IRC07:21
*** pradeep1 has quit IRC07:25
*** jkyle has joined #openstack07:26
*** jmckenty has quit IRC07:30
*** debo-os has quit IRC07:31
*** dolphm has joined #openstack07:36
*** negronjl has quit IRC07:39
*** pradeep1 has joined #openstack07:40
*** jkyle has quit IRC07:43
*** foexle has quit IRC07:45
*** dachary has joined #openstack07:46
*** dolphm has quit IRC07:50
*** woleium has quit IRC07:51
uvirtbotNew bug: #843046 in keystone "Revocation of tokens" [Wishlist,Confirmed] https://launchpad.net/bugs/84304607:52
*** arBmind|2 has quit IRC07:55
uvirtbotNew bug: #843064 in keystone "Nova integration docs cite bogus 'ln' command ..." [Medium,Confirmed] https://launchpad.net/bugs/84306407:56
*** dachary has quit IRC07:56
*** bush has joined #openstack07:56
uvirtbotNew bug: #843053 in keystone "Packaging recipes" [Low,Confirmed] https://launchpad.net/bugs/84305307:57
*** reidrac has joined #openstack07:57
bushHi, I'm trying to use the shell script that builds a complete OpenStack development environment from http://devstack.org/  It fails when running /opt/stack/nova/bin/nova-manage db sync07:58
bushnova.exception.ClassNotFound: Class Client could not be found: cannot import name deploy07:59
bushAny suggestions?07:59
*** halfss has joined #openstack08:01
*** troya has quit IRC08:02
uvirtbotNew bug: #843057 in keystone "AdminURL should not be returned on ServiceAPI (dup-of: 854104)" [High,Confirmed] https://launchpad.net/bugs/84305708:12
*** dachary has joined #openstack08:18
*** foexle has joined #openstack08:18
*** redconnection has quit IRC08:21
*** shaon has joined #openstack08:22
*** Razique has joined #openstack08:22
*** mikhail has quit IRC08:24
*** rustam has joined #openstack08:25
lzyeval08:26
foexlehiho08:26
lzyevalbush: did you install all dependencies? http://wiki.openstack.org/InstallFromSourc08:28
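The "cannot import name deploy" failure bush reports usually points at a missing or stale PasteDeploy module; a quick check and fix, assuming pip is available (package names are the usual PyPI ones):

    python -c "from paste import deploy"    # an ImportError here confirms the missing module
    sudo pip install Paste PasteDeploy      # or re-run devstack's stack.sh, which installs the pip requirements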
*** popux has joined #openstack08:30
*** sticky has quit IRC08:39
*** pradeep1 has quit IRC08:39
*** sticky has joined #openstack08:39
*** irahgel has joined #openstack08:41
*** Guest73472 is now known as mu574n908:41
*** nacx has joined #openstack08:41
*** map_nw_ has quit IRC08:42
*** mu574n9 is now known as Guest1858808:42
Raziquehey foexle08:42
Raziquewhat's up my friend ? :)08:43
livemoonhi, Razique08:47
*** map_nw has joined #openstack08:47
foexleRazique: heyho razique :) i'm fine ... verry tired today but ok ;) ... and you ?08:48
*** koolhead11 has joined #openstack08:49
*** shaon has quit IRC08:50
*** Guest18588 is now known as mu574n908:51
*** mu574n9 has quit IRC08:51
*** mu574n9 has joined #openstack08:51
*** koolhead11 has joined #openstack08:53
koolhead11hi all08:53
*** guigui1 has quit IRC08:54
*** rustam has quit IRC08:55
livemoonhi, kool08:55
foexlehey koolhead11 & livemoon :)08:56
koolhead11hi livemoon foexle08:56
*** pradeep1 has joined #openstack08:57
*** cmu has joined #openstack08:58
*** jedi4ever has quit IRC08:59
*** adiantum has joined #openstack09:03
livemoonI have finished my scripts install openstack09:04
foexlegreat :)09:04
koolhead11livemoon: cool09:05
koolhead11livemoon: and does it uses everything from git repo09:05
*** uksysadmin has joined #openstack09:10
*** dobber has joined #openstack09:11
*** guigui1 has joined #openstack09:12
*** cmu has left #openstack09:13
*** mgoldmann has joined #openstack09:15
livemoonyes, according to devstack scripts09:15
koolhead11livemoon: so what exactly your script changes, keystone infos for Database09:16
koolhead11hola uksysadmin09:16
foexleany knows when the next stable version comes out ?09:16
*** pixelbeat has joined #openstack09:17
*** dev_sa has joined #openstack09:17
*** shaon has joined #openstack09:17
*** javiF has joined #openstack09:18
uksysadmin'sup koolhead1109:19
Raziquehey uksysadmin koolhead11 livemoon :)09:19
koolhead11uksysadmin: notthing much09:19
*** popux has quit IRC09:19
* koolhead11 kicks Razique 09:19
koolhead11:D09:19
Raziquehehe09:19
koolhead11Razique: was looking for you once i reached hope for the docs update :D09:19
uksysadminword all09:20
Raziquekoolhead11: tell me09:20
* uksysadmin is going all 80s skater American today09:20
*** Razique has quit IRC09:20
*** Razique has joined #openstack09:20
*** foexle has quit IRC09:22
*** foexle has joined #openstack09:24
*** dev_sa has quit IRC09:25
*** MarkAtwood has quit IRC09:31
*** MarkAtwood has joined #openstack09:34
*** dev_sa has joined #openstack09:35
*** mrevell has joined #openstack09:36
*** mrevell has quit IRC09:38
*** mrevell has joined #openstack09:38
*** pradeep1 has quit IRC09:39
*** javiF has quit IRC09:39
*** shaon has quit IRC09:41
*** rustam has joined #openstack09:41
*** katkee has joined #openstack09:42
*** TheOsprey has quit IRC09:43
*** alexn6 has joined #openstack09:45
*** TheOsprey has joined #openstack09:46
*** katkee has quit IRC09:47
*** pradeep has joined #openstack09:54
*** darraghb has joined #openstack09:56
*** dysinger has joined #openstack09:58
*** troya has joined #openstack10:01
*** shaon has joined #openstack10:03
* Razique slaps ChanServ around a bit with a large bass10:05
alexn6Hi! can somebody say - is it ok to, for example, ssh from a running instance back to its public IP? (flatDHCP mode, 2 nics). One can ssh back to its private address, and everything looks correct with the iptables SNAT, but it still isn't possible.10:06
*** javiF has joined #openstack10:08
*** livemoon has left #openstack10:10
*** livemoon has joined #openstack10:11
*** cloudgeek has joined #openstack10:15
*** jantje_ has quit IRC10:23
*** jantje has joined #openstack10:23
*** livemoon has left #openstack10:26
*** littleidea has joined #openstack10:33
*** ccorrigan has joined #openstack10:37
*** supriya has quit IRC10:37
foexlealexn6: hey, i'm sorry i don't understand what you mean :>, do you try to get a ssh login to another instance with the backnet ip's ?10:37
*** corrigac has joined #openstack10:40
*** ccorrigan has quit IRC10:42
lionelhello. Is there any documentation/tutorial on using multiple nic in nova?10:44
*** supriya has joined #openstack10:44
alexn6foexle: I want to ssh from the instance back to itself but on the public IP (I add the IP with euca-associate); it's ok when sshing back on its private address10:45
*** tryggvil has quit IRC10:47
foexlealexn6: why do you do that ? °° ssh to localhost ? .... I'm not sure what your use case is .... so i can't give a correct answer .... but yes, you can ssh login to the same instance10:51
*** mrevell has quit IRC10:51
*** dev_sa has quit IRC10:52
*** mrevell has joined #openstack10:53
*** dev_sa has joined #openstack10:56
alexn6foexle: in my case I cannot do so and don't understand why. We have a service that accesses resources on the VM by its public IP.10:58
foexleso you cant login via public ip to your vm ?11:00
foexleor only from backnet to public ip?11:01
alexn6foexle: what exactly I want - nova-network on v.v.v.1,  VM on private v.v.v.2 has real IP r.r.r.r. I go to the VM by ssh on the real or private IP, and then from the VM go to it again via the real IP (via the private one it's ok)11:01
foexleah yeah .... you need a extra nic in each vm11:01
foexlenormally you have a default route on your host server to access public ips11:02
*** lzyeval has quit IRC11:02
foexlecan you use domains instead of ips?11:03
*** ahasenack has joined #openstack11:05
*** ollie1 has joined #openstack11:09
*** JesperA has joined #openstack11:09
alexn6why them better?11:11
alexn6possibly not11:11
foexleyou can simply use your etc/hosts file11:12
alexn6for what extra nic? are you sure?11:12
*** katkee has joined #openstack11:12
foexlealexn6: no not sure :) ... this use case havn't heared before :)11:13
alexn6foexle: can you check on your installation?11:15
uvirtbotNew bug: #894333 in nova "Data Loss in VM if the vm is created from snapshot(seen this happening often)" [Undecided,New] https://launchpad.net/bugs/89433311:15
foexlenot possible atm :) .... i'm moving the complete system to production hw .... so i don't have a running cloud atm11:16
*** brainsteww has joined #openstack11:17
alexn6and? you just need some running linux VM in cloud11:17
foexlei dont have 111:17
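One way to narrow down the hairpin problem alexn6 describes is to inspect the NAT rules nova-network created for the floating address on the network/compute host; a diagnostic sketch, not a fix:

    sudo iptables -t nat -S | grep <floating-ip>   # shows the DNAT/SNAT pair for that address
    sudo iptables -t nat -L -n -v                  # per-rule packet counters show which rule the hairpin traffic actually hits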
*** PotHix has joined #openstack11:20
*** mnour has joined #openstack11:27
*** dysinger has quit IRC11:31
*** uksysadmin has quit IRC11:31
*** bush has quit IRC11:32
*** katkee has quit IRC11:36
*** dysinger has joined #openstack11:39
*** foexle has quit IRC11:41
*** foexle has joined #openstack11:41
*** katkee has joined #openstack11:46
*** guigui1 has quit IRC11:48
*** Razique has quit IRC11:48
*** halfss has quit IRC11:48
uvirtbotNew bug: #894323 in nova "Nova API exposes hostId to non-admin" [Undecided,New] https://launchpad.net/bugs/89432311:56
*** livemoon has joined #openstack11:56
*** yshh has joined #openstack12:02
*** HugoKuo_ has joined #openstack12:12
zykes-anyone have knowhows on products for frontends for swift?12:14
*** hugokuo has quit IRC12:16
*** rsampaio has joined #openstack12:23
*** MarkAtwood has quit IRC12:24
*** cereal_bars has joined #openstack12:24
*** abecc has joined #openstack12:25
zykes-notmyname: here ?12:31
*** littleidea has quit IRC12:36
*** JStoker has quit IRC12:37
*** rsampaio has quit IRC12:38
*** zz_bonzay is now known as bonzay12:40
*** JStoker has joined #openstack12:40
*** supriya has quit IRC12:42
*** bonzay is now known as zz_bonzay12:42
zykes-anyone here doing stuff with swift ?12:43
reidracyeep12:44
zykes-what servers are you using ?12:44
*** hugokuo has joined #openstack12:45
reidracservers? do you mean hardware?12:48
zykes-correct12:48
*** _rfz has quit IRC12:49
reidrac4U 2 x Quad Xeon with 24 disks12:50
reidracthat's for each storage node12:50
zykes-what's the price for one of those?12:51
reidracI don't have that information12:51
zykes-ah, doh ;)12:51
zykes-you know which server model ?12:51
reidracyou can look for "4U 2 x Quad Xeon with 24 disks" in google12:52
zykes-dells ?12:52
reidracI'm not in ops, I don't deploy the hw :)12:53
reidracnot sure, we work with other providers12:53
*** Razique has joined #openstack12:54
jasonahmm12:54
jasonaanyone done a RFQ for openstack supported storage for swift ?12:55
*** Razique has quit IRC12:55
*** Razique has joined #openstack12:55
zykes-rfq ?12:55
jasonarequest for quote.. also around request for proposal or request for information12:55
jasonai.e, if you wanted to go buy something and wanted to give vendors a list of things they had to do12:56
zykes-i wonder what hardware i would need12:56
jasonathere's a few suggestions in the openstack doco but wondering if anyone has gone through this recently.12:56
zykes-firstly for a starter setup12:56
zykes-:p12:56
reidraczykes-: you can have swift all in one machine12:56
zykes-isn't that a bit risky ?12:56
reidracit's a test setup12:57
reidrachow many zones do you want to implement?12:57
zykes-firstly 1 i guess12:57
zykes-i mean 1 server12:57
zykes-to see that it works12:57
reidracthen is all in one12:57
reidracyou need at least 3 zones if you want to use 3 replicas12:57
zykes-yeah, that means 3 groups of drives12:58
*** brainsteww has quit IRC12:58
zykes-can't be done with 2 zones ? ;p12:58
reidracyou said: zykes-: isn't that a bit risky ?12:58
reidrac:)12:58
zykes-heh12:58
reidrachave you read the docs?12:58
zykes-yeah12:59
reidracI see12:59
guaquayou can run with 2 replicas, 2 servers12:59
hugokuozykes , 1 physical machine, 5 disks12:59
guaquathe problem is, if 1 server dies, it's read-only12:59
hugokuozykes , two for system using RAID12:59
guaquaso it's basically down then12:59
hugokuothree disks for 3 zones12:59
guaquathe data is intact, but it cannot operate12:59
hugokuoif that deployment is only for personal use, it would be fine13:00
Raziquehey hugokuo13:00
zykes-but say 3*1 tb in 3 zones13:00
hugokuoRazique , bonjour13:00
zykes-then you only have 1 tb of capacity ?13:00
Razique:)13:00
hugokuozykes , yup13:00
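Spelling out the arithmetic behind that answer: with whole-object replication, usable capacity is roughly raw capacity divided by the replica count.

    usable ≈ raw / replica_count
    3 zones × 1 TB with 3 replicas  →  3 TB / 3 ≈ 1 TB usable (before filesystem and ring overhead)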
livemoonhi,all13:01
Raziquehey zykes-13:01
Raziquelivemoon: =-13:01
Razique:)13:01
zykes-how much memory does one need for say a server that has 12*2tb drives and 1 quad core ?13:01
Raziquezykes-: for which usage ?13:02
zykes-swift13:02
Raziqueah ok13:02
hugokuozykes , for which swift worker ?13:02
Raziquewouldn't misguide :)13:02
zykes-isn't a quad core enough then ?13:02
zykes-hugokuo: storage node13:03
hugokuozykes , who will you use it ?13:03
hugokuointernal using or ?13:03
zykes-hugokuo: web customers and archive system13:03
zykes-but not "heavy" public usage13:03
hugokuoI think it would be enough for a "storage node"13:04
zykes-how much hugokuo memory13:04
hugokuoaround 8GB ram …. but I did not test it though13:04
hugokuoon one of my swift deployments with 6 storage nodes (desktop, 4 cores, 16 GB)13:06
*** guigui has joined #openstack13:06
hugokuothe load on each storage node is not that high13:06
hugokuomy memory usage never goes over 1GB13:06
hugokuoyou might be interested in the recommended HW spec from Rackspace13:07
troyahi All13:07
hugokuozykes , http://www.referencearchitecture.org/13:09
*** mjfork has joined #openstack13:14
*** uksysadmin has joined #openstack13:15
*** hugokuo has quit IRC13:16
*** deshantm_laptop has joined #openstack13:25
*** praefect has quit IRC13:26
*** praefect has joined #openstack13:26
*** dev_sa has quit IRC13:26
zykes-hmmm, i wonder on how many hours one could count for a "basic" cluster13:26
zykes-with 3 zones13:26
jasonahmm13:30
jasonaso been reading the commentary13:30
jasonaand back to original question13:30
jasonaanyone had to buy this stuff and written or got access to a rfp/rfq/rfi ? :)13:30
jasonai'm particularly interested in how you asked vendors to supply storage around what nova needs, vs what swift needs.13:31
jasonahmm, quiet. :)13:33
*** osier has quit IRC13:35
JesperAHmm, in http://www.referencearchitecture.org/ it suggests a Dell C2100 as a storage node, why not use a Dell R510/R515 for that? Much cheaper13:35
zykes-i'm looking at a single quad core 16 disk box with 8 gig ram13:37
*** shaon has quit IRC13:37
zykes-for storage nodes now13:37
JesperAzykes- a supermicro box?13:38
reidracwe're using 16GB in our storage nodes, but it looks like they're using around 4GB13:38
zykes-JesperA: yes13:39
zykes-how come ?13:39
zykes-JesperA: why a R510/515 ?13:39
JesperAzykes- because i am thinking about that too13:39
reidracit would be really useful knowing some figures :)13:39
zykes-reidrac: that's what i'm investigating13:39
zykes-currently i'm looking at 12-24 disk nodes13:39
zykes-single processes13:39
zykes-processor13:39
JesperAzykes- will you be using a separate proxy server?13:41
jasonathanks jespter. the reference arch is useful13:43
zykes-JesperA: unsure yet13:46
*** abecc has quit IRC13:46
jasonajesper: the reference arch covers nova and bits of swift13:54
*** hadrian has joined #openstack13:54
jasonabut i am not quite seeing the nova storage bits exactly. hmm13:54
JesperAIn the example it uses a Dell MD3200i, but it all depends how much storage is needed13:55
zykes-for what JesperA swift?13:56
JesperAnope, nova13:56
jasonahmm.14:02
cloudgeekHi all14:04
*** pradeep has quit IRC14:05
*** katkee has quit IRC14:08
foexleRazique: do you know when the next stable version planed is ?14:08
Raziquefoexle: yah14:08
foexlejan 2012 ? ;)14:09
RaziqueApril the 5th14:09
foexleoh ok14:09
Raziqueessex 2012.114:09
Razique:)14:09
foexleah k :>14:10
foexleapril ... with the new lts version ^^14:11
zykes-next stable release no is 2012.3 Razique ?14:11
zykes-2012.1 is already out14:11
zykes-i thought14:11
zykes-or how is that versioning stuff again14:11
foexlestable = 2011.3 (diablo)14:11
*** corrigac has quit IRC14:12
Raziquezykes-: I just checked: OpenStack 2012.1, 2012-04-05, not yet released14:12
zykes-ok14:12
*** chemikadze has quit IRC14:13
livemoonRazique: have you used essex-1?14:15
Raziquelivemoon: not at all, I'm still using diablo stable =d14:15
Raziquenever tried trunk14:15
Raziquewhat about you livemoon ?14:15
livemoonso do I14:15
livemoonbye14:16
*** debo-os has joined #openstack14:16
livemoonI am going to read my kindle14:16
*** redconnection has joined #openstack14:17
*** livemoon has left #openstack14:19
*** rods has joined #openstack14:19
*** dev_sa has joined #openstack14:21
*** _rfz has joined #openstack14:24
zykes-swift uses JBOD no ?14:24
*** katkee has joined #openstack14:24
*** dubenstein has joined #openstack14:26
dubensteinhi #openstack14:26
zykes-JesperA: what kind of disks are you on ?14:26
Glaceezykes: it is recommended not to use RAID14:26
Glaceefor objects/accounts/containers14:26
zykes-Glacee: i know14:26
JesperAzykes- i have not decided, i am also in the planning stage14:27
dubenstein«glance add» is taking too long, bursting the machine load to maximum; has anyone experienced an issue like that?14:27
*** debo-os has quit IRC14:27
jasonahmm14:28
*** debo-os has joined #openstack14:28
*** deshantm_laptop has quit IRC14:28
jasonajesper: if you are planning, do you have any docs yet ?14:28
jasonai just finished v0.1 of a procurement spec, really looking to see what other people are specifying also :)14:29
Glaceezykes: yeah.. I just went up the channel to read previous conversation :014:29
zykes-hmmmm14:29
zykes-i don't think this can be right14:30
JesperAjasona but you are looking at a Nova cluster, right?14:30
Glaceehonestly.. the best bang for bucks right now that I found.. is using 36disks box in 4U14:31
jasonajesper: no, the whole shebang14:31
jasonai need to specify nova compute servers (for lots of VMs), nova storage, swift storage, glance servers and anything else i need14:31
zykes-Glacee: what controllers you doing ?14:31
Glacee3ware 24ports controllers14:32
Glaceenot using expanders14:32
zykes-ah14:32
*** rsampaio has joined #openstack14:32
zykes-Glacee: i'm doing lsi jbod controllers (8 port) in a 16 slots bo14:32
zykes-box14:32
jasonaalso separately specifying big data storage to run alongside swift.14:32
JesperAGlacee http://farm7.staticflickr.com/6062/6074472208_aaeafd80bd_b.jpg14:32
zykes-jasona: care to share the spec or confidential?14:32
JesperAjasona ok i think you are further into the planning stage than i am14:32
jasonazykes: willing to share with community down the track but can't share it for a few days. i.e it does have to go to vendors first14:33
zykes-ah14:33
Glaceejespera: yeah backblaze.. thats one thing we consider but14:33
jasonaequipment order planned within a few weeks :)14:33
Glaceebe careful for CPU/RAM14:33
zykes-i ended up with about14:33
zykes-10k $ for 16*2tb14:33
zykes-pr storage node14:33
jasonajesper: maybe. i need to buy kit in 2-3 weeks. must have cluster up in next 8 weeks or so.14:34
Glaceeat least just put objects on backblaze.. even then14:34
Glaceethe CPU by default is propably too low14:34
zykes-Glacee: which one ?14:34
Glaceezykes: ok not bad.. the box we have.. is around 16k for 66*3TB14:34
zykes-66*3tb ?14:34
zykes-1 box ?14:34
Glaceesorry 33*3tB14:35
jasonaglaceee: you looked at dells stuff or just found better bang/buck via others ?14:35
zykes-only bad thing is that harddrive prices is shitless expensive atm14:35
Glaceeothers jasonA14:35
zykes-Glacee: you got a spec or ?14:35
jasonaalso, anyone comments on the 4T 2.5" drives shipping next year ?14:35
zykes-4T 2.5 ?14:35
zykes-damned14:35
Glaceehmm not handy... but ask if you want something specific14:35
Glaceejasona: they will propably be too expensive14:36
Glaceejasona: are you buying consume grade drves for your backblaze pod?14:36
Glaceeconsumer*14:36
zykes-Glacee: server model etc14:36
zykes-backblaze ?14:36
jasonaglacee: not building backblaze pods but am expecting to use near consumer drives though14:36
jasonai.e either sata, or preferred, NLSAS.14:36
Glaceeoh ok.. the picture you sent was backblaze :) jasona14:36
JesperAit was me Glacee14:37
jasonadidn't send a pic :)14:37
jasonajesper did!14:37
zykes-what's backblace ?14:37
Glaceeohh ok lol.. jasonA.. Jespera wow14:37
zykes-blaze14:37
Glaceeconfusing names :)14:37
jasonalook at openstoragepod.org or something like that zykes14:37
jasonaglacee. zykes. they're like practically identical also! :)14:37
JesperAzykes- http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/14:37
jasonai would _like_ to have build a BBP as part of this project but well, not really enough money in it14:38
zykes-bbp ?14:38
jasonabackblazepod14:38
Glaceethe thing with consumer drives.. you need to make sure that you have some anti-vibration mechanism in place14:38
Glaceeor you will have a lot of failures14:38
jasonaglacee: you mean.. a whole packet of rubber bands with each drive ?? :)14:38
Glaceeentreprise grade drives have anti-vibration features in them14:39
Glaceejasona: yeah :)14:39
Glaceethats one option14:39
zykes-what drives should one use ?14:39
zykes-hitachi ultrastar?14:40
jasonawell, in a world where money is no limit14:40
Glaceedepends on your use case :014:40
jasonaall SSD zykes! :)14:40
zykes-but ultrastar is usable ?14:40
*** rbp has joined #openstack14:40
Glaceezykes: why not14:41
*** praefect_ has joined #openstack14:41
*** deshantm_laptop has joined #openstack14:41
Glaceewer looking to modify backblaze pods.. to get more CPU/RAM and test it with swift14:42
Glaceewell see how it goes :014:42
zykes-hmm14:42
zykes-funny14:42
zykes-7746,363636363636$ for 32 tb14:42
zykes-not bad at all14:42
JesperAGlacee well, its really easy to just buy a more powerful cpu, so why modify?14:42
GlaceeJesperA the board they use has a limitation on the CPU you can use, if I remember correctly14:43
Glaceeand its pretty low14:43
zykes-isn't that pretty decent pricing ?14:43
*** praefect has quit IRC14:43
zykes-i wonder if i can throw in a 24-32 slot chassis14:43
Glaceezykes:its alright14:43
zykes-and make it even more powerful within 15K14:44
Glaceedepends of your use case14:44
Glaceedo you really need 15k for object storage?14:44
zykes-Glacee: in $ the cost of each storage node14:44
Glaceeohh within 15k $ sorry lol14:44
zykes-i wanna see if i can double the amount of drives with chassis and controllers etc within 1514:45
coliyou would be surprised that 7.2k rpm sata drives are more often than not much faster than 15k sas drives14:45
JesperAyeah you are right Glacee the most powerfull cpu on the motherboard they are using is: http://ark.intel.com/products/48501/Intel-Xeon-Processor-X3480-(8M-Cache-3_06-GHz)14:46
*** rsampaio has quit IRC14:50
JesperAWell, i am not sure about port multipliers, cant squeeze out to much speed from those i guess14:50
Glaceedepends on  your use case.. backblaze pods.. usually get filled for backups and then they do nothing14:50
*** shaon has joined #openstack14:51
JesperAyeah, fits perfect for them, but i have more reads than writes14:51
*** dubenstein has quit IRC14:51
zykes-http://pastebin.com/N8NWus8h14:52
zykes-that's what i'm at atm14:52
*** dubenstein has joined #openstack14:53
zykes-but damned14:54
zykes-i wasn't aware that my motherboard had 14 connectors14:54
zykes-means i can scale down on that solution to 1*4 port extra14:54
JesperAwhat currency is that?14:54
zykes-NOK14:54
zykes-divide it by 5.5 for $14:54
Glaceeseems like a crazy currency.. 9000$ for a chassis14:55
JesperAok14:55
zykes-it's not glace14:55
zykes-it's in norwegian kroners pre vat14:56
Glaceeoh ok14:56
JesperAyeah, here in Sweden it would cost something like 1300014:56
zykes-1551 $14:56
royhzykes-: hi there :)14:56
zykes-royh: ? i know you or ;p14:57
royhzykes-: nope. but I use the same currency as you :P14:58
*** katkee has quit IRC14:58
*** dev_sa has quit IRC15:00
*** pradeep has joined #openstack15:02
dubenstein«glance add» is taking too long, bursting the machine load to maximum; has anyone experienced an issue like that?15:05
*** dev_sa has joined #openstack15:05
dubensteintrying to «glance add»  oneiric-server-cloudimg-amd64.img, it's 1.4G15:06
dubensteinno swift backend15:07
zykes-anyone here got knowhows on the Dell DX platform ?15:07
*** katkee has joined #openstack15:10
*** kaigan_ has quit IRC15:16
*** guigui has quit IRC15:16
*** dev_sa has quit IRC15:23
*** guigui1 has joined #openstack15:29
*** alekibango has quit IRC15:34
*** alekibango has joined #openstack15:34
*** foexle has quit IRC15:40
*** debo-os has quit IRC15:42
*** wariola has quit IRC15:49
*** debo-os has joined #openstack15:50
*** wariola has joined #openstack15:50
uvirtbotNew bug: #894431 in nova "linux_net ovsinterfacedriver is setting the wrong iface-id" [Undecided,Confirmed] https://launchpad.net/bugs/89443115:51
*** o86 has joined #openstack15:53
*** o86 has left #openstack15:53
*** uksysadmin has quit IRC15:54
*** dragondm has quit IRC15:54
*** 36DAAU429 has quit IRC15:54
troyahi all15:58
*** cloudgeek has quit IRC15:59
troyahi zykes15:59
*** TheOsprey has quit IRC16:01
*** mindpixel has quit IRC16:05
*** reidrac has quit IRC16:07
*** koolhead11 has quit IRC16:13
*** freeflyi1g has quit IRC16:13
*** cloudgeek has joined #openstack16:14
*** freeflying has joined #openstack16:15
*** yshh has quit IRC16:16
colianybody know what happens if a tenant has many instances and runs out of IP addresses in the fixed_ip range assigned to them?16:19
_rfzcoli, once the IPs are all used up, the next VM you try to spin up will fail - with an error something like "no more fixed IPs to lease"16:21
*** guigui1 has quit IRC16:23
*** troya has quit IRC16:24
*** shaon has quit IRC16:27
*** jkyle has joined #openstack16:28
*** n8 has joined #openstack16:35
*** n8 is now known as Guest6817316:35
zykes-anyone tell me what a bastion server is ?16:36
*** shaon has joined #openstack16:38
jkylewhen I do a nova-manage floating list I get output like: <hostname> <ip_address>: None16:38
jkyledoes the 'None' mean this ip has not been allocated16:38
*** gerry__ has joined #openstack16:39
*** dobber has quit IRC16:40
*** koolhead17 has joined #openstack16:45
*** TheOsprey has joined #openstack16:47
*** debo-os has quit IRC16:47
*** pradeep has quit IRC16:54
*** debo-os has joined #openstack16:54
*** JesperA is now known as c01416:54
*** chemikadze has joined #openstack16:56
*** c014 has quit IRC16:58
*** Jeppelelle^aw has joined #openstack17:01
*** jkyle has quit IRC17:01
*** JesperA has joined #openstack17:02
*** bryguy has quit IRC17:03
*** cereal_bars has quit IRC17:03
*** bryguy has joined #openstack17:04
*** alexn6 has left #openstack17:06
*** oonersch has joined #openstack17:06
*** jkyle has joined #openstack17:08
*** jkyle has joined #openstack17:09
*** rbp has quit IRC17:09
*** crescendo has quit IRC17:11
*** dysinger has quit IRC17:12
*** woleium has joined #openstack17:15
*** mrevell has quit IRC17:15
*** jkyle has quit IRC17:19
*** tryggvil_ has joined #openstack17:20
_rfzOn a FlatDHCPManager network, is it possible to ping the internal IP of the controller and compute nodes?17:20
*** katkee has quit IRC17:23
zykes-anyone here good at supermicro hardware?17:26
*** mnour has quit IRC17:30
*** MarkAtwood has joined #openstack17:32
coli_rfz: yes17:32
*** Razique has quit IRC17:32
colizykes-: what do you require ? we have full room of these17:32
coli_rfz: I retract my answer. what do you mean by "FlatDHCPManager network" ? fixed_ip network ?17:35
*** pixelbeat has quit IRC17:35
*** shaon has quit IRC17:36
*** shaon has joined #openstack17:39
*** jkyle has joined #openstack17:39
*** irahgel has left #openstack17:42
zykes-coli: hardware for swift17:42
vidd-awaycoli, i would assume he means "i have flat networking with dhcp set up and not vlan"17:42
colividd: evening ;-)17:44
colividd: by default nova is set to "vlan" mode for networking, how can I check what I have set up? (there is no hint in nova.conf as far as I can see, and I have installed from Kiall's scripts)17:45
colizykes-: there are cases 3 or 4U high which hold 28-32 drives; we use them.17:45
colizykes: plus 3Ware controllers, the new 3750 ones just rock, very fast.17:46
vidd-awayhello coli, if you did not specifically set up flat networking, then you have vlan17:46
zykes-3wares for jbod ?17:46
colizykes: what do you mean by jbod ? we put everything in one case, which I have mentioned.17:47
* vidd-away has to go drive halfway accross the state for thanksgiving dinner =[17:47
zykes-coli: but you don't raid disks no ? ...17:47
vidd-awayhave fun y'all17:47
Jeppelelle^awcoli do you have any whitepapers on the 3750? Cant seem to find them, not out in public yet?17:48
*** Jeppelelle^aw is now known as JesperA17:48
zykes-i can't even find the controller here17:48
*** pixelbeat has joined #openstack17:49
colijepp: i'm wondering if I have made a mistake with the model, just a sec17:49
coli3ware 9750.... sorry17:50
*** bamedro has joined #openstack17:50
coliwe have connected bunch of kingston hyperx ssd drives, it was blazing fast17:51
zykes-that's a 8 port controller ?17:51
colinever seen anything that quick when it comes to disk arrays17:51
colithere are 16 and 24 port versions17:51
coliwe use two 16 port per case17:52
zykes-coli: with a expander backplane17:52
zykes-can't you run a 2*8087 connectors with 6 gbit for all disks for sata ?17:52
zykes-for 24 disks17:52
colizykes: I haven't seen them physicly, so possible it's with expander17:53
colizykes: we had 6 or 8 kingston hyperx ssd drives and all were connected at 6gbit17:53
colizykes: afair they had to change the cables in the case as they were unreliable and speeds were slow, after changing to new cables from backplane to card (I think foxconn cables were used) it was working fine.17:55
zykes-10 gigabit network coli ?17:55
*** dachary has quit IRC17:55
colizykes: for storage main nodes, yes.17:55
zykes-ok17:57
colizykes: tests with kingston hyperx ssd drives were done in order to satisfy our curiosity ;-)17:57
JesperAcoli what make/model of switches are you using for 10Gbit?17:58
coliasfair cisco 48xx or 49xx, I will check but I think it begins with 4... ;-)17:58
zykes-not force10 ?17:59
*** _rfz has quit IRC18:00
JesperAYeah the hardware recommendation suggests a 49xx: http://www.referencearchitecture.org/hardware-specifications/18:00
zykes-JesperA: doesn't mean something else won't work18:01
zykes-does one need to use 10gigabit for replication net ?18:02
Glaceearent 10gb cisco super expensive?18:03
JesperAwhat cisco gear isnt? :D18:03
zykes-JesperA: that's for aggregate switches18:03
zykes-if you've got lots o racks18:03
zykes-we're gonna start at 3 nodes18:04
zykes-most likely18:04
zykes-so no need for that18:05
*** debo-os has quit IRC18:05
*** deshantm_laptop has quit IRC18:05
coliI can see c49xx and some Brocade switches18:05
*** jkyle has quit IRC18:05
zykes- coli what about force10 ?18:07
zykes-hp18:07
colizykes: I know they exist :-) we don't use them18:07
colizykes: personally I have very bad experience (end of '90s) with hp switches, however as far I'm told they have moved forward a lot ;-)18:08
JesperAcoli, what is the reason behind the recommended hundreds of partitions per drive in swift?18:08
colizykes: if you are starting with 3 nodes why do you worry ?18:08
coliJesperA: no idea. I didn't touch swift yet. still working out how to use nova.18:09
JesperAoh ok18:09
zykes-coli: i ain't18:09
zykes-we're starting with swift if it's going to be at all18:09
*** jkyle_ has joined #openstack18:10
colizykes: just out of curiosity any particular business reason to start with swift ? public or private use ?18:10
coliby public I mean a lot of small users.18:11
zykes-private for customers atm18:11
zykes-ehm, so it becomes "both"18:11
zykes-really18:11
zykes-isn't a server of 24 disks 24 nodes ?18:12
zykes-in swift18:12
zykes-server_ip + disk18:13
zykes-= node18:13
*** redconnection has quit IRC18:14
notmynameJesperA: swift partitions != disk partitions. swift partition == logical keyspace partition used to balance data throughout the cluster18:14
zykes-notmyname: is my question "correct"?18:15
JesperAoh ok that makes much more sence18:15
*** adiantum has quit IRC18:15
notmynamezykes-: sorry didn't see it18:15
notmynamezykes-: 10gb for replicate?18:15
zykes-notmyname: no > $node = $server + $device18:16
Glaceenotmyname: I think you mentioned that swift keeps all the disks operating at all time? is this true or I am mistaken.. and if yes, is it because of the auditors/replicators?18:16
notmynamenormally the way I use "storage node" is as the box running the particular processes (container, account, object). or perhaps, if you deploy this way, the controller server + the JBODs18:17
coliGlacee: what do you mean by "disks operating at all time" ?18:17
notmynamezykes-: I think of storage volume == IP + port + mount point18:17
notmynameGlacee: yes. the disks spin all the time because of auditors, replicators, and also because new data can go to any disk18:18
zykes-notmyname: same thing then anyways18:18
zykes-if you do 1 disk pr mount point ?18:18
Glaceenotmynam: thanks18:19
notmynamezykes-: "mount point" is more generic. eg you could do a RAID 10 volume18:19
*** oubiwann has quit IRC18:19
zykes-yeah18:19
*** nacx has quit IRC18:20
notmynamegotta go18:20
zykes-darn18:20
notmynameI'll be back later :-)18:20
*** rustam has quit IRC18:22
*** darraghb has quit IRC18:22
*** oonersch has quit IRC18:23
zykes-anyone here up for some economics on swift?18:25
JesperA10 billion dollars18:25
JesperA:)18:25
zykes-;p18:26
zykes-if i should calculate the price for gb/month of a swift zone18:26
zykes-what formula would that be ?18:26
guaquazykes-: i was doing the same thing just today18:26
guaquait depends on the replica count, amount of zones18:27
guaquawhat kind of hardware you have18:27
zykes-supermicro18:27
guaquayou have to figure out how much your hardware costs18:28
zykes-that i know18:28
guaquaand after that, how much it costs for you to run it18:28
zykes-i don't have the "run" costs18:29
zykes-atm18:29
zykes-i have the buy cost pr gb pr month18:29
guaquaoperate would be a better term for it18:29
zykes-https://docs.google.com/spreadsheet/ccc?key=0AufFjyusNdg4dGttdmtCY3BHR21aU0ZIcDBwcmoxRlE#gid=018:29
guaquai guess swift-specific is just how many zones and how many replicas18:30
zykes-https://docs.google.com/spreadsheet/ccc?key=0AufFjyusNdg4dGttdmtCY3BHR21aU0ZIcDBwcmoxRlE sorry is the link18:30
zykes-what's the diff on a zone and replica?18:30
guaquaahh, nice18:30
*** lionel has quit IRC18:30
guaquareplica count means how many zones a given file is stored in18:30
guaquaif you have 3 zones and 3 replicas, it's on all of them18:31
zykes-that doesn't count though for the cost of hardware ?18:31
zykes-i'm working on the "small" setup atm18:31
guaquadepends on how you set it up18:31
zykes-care to help?18:31
*** dolphm has joined #openstack18:31
*** lionel has joined #openstack18:32
zykes-i just need to find out the price for a zone pr month18:32
guaquai'm doing something similar myself18:32
*** gerry__ has quit IRC18:32
zykes-can we share a document ?18:32
guaquai'm looking at yours18:32
guaqualet's say you have 12 2 TB drives18:33
guaquaand you have 4 nodes18:33
guaquaand you put them into 4 zones18:33
zykes-4 servers yeah18:33
zykes-that's theoretically 4*12 nodes18:33
zykes-in "swift" terms18:33
guaquaso each zone consists of one physical server18:33
guaquai think a node is a server18:33
guaqua4 * 12 devices18:34
zykes-swift_node then ;)18:34
zykes-i only need 3 zones no ?18:34
guaquai'm not exactly sure how many one would like to have18:34
JesperA5 is the recommended minimum but 3 would work18:35
zykes-i think 4 is "recommended" if i remember notmyname correctly18:35
zykes-5 is "optimal"18:35
zykes-:p18:35
guaquawhat makes 5 the desired count?18:35
*** dolphm has quit IRC18:35
*** krow has quit IRC18:39
*** mgoldmann has quit IRC18:40
*** cereal_bars has joined #openstack18:41
*** nerens has quit IRC18:43
*** po has joined #openstack18:49
*** bengrue has joined #openstack18:52
*** redconnection has joined #openstack18:52
*** clauden__ has quit IRC18:56
zykes-ooho18:59
zykes-cool thing to see the difference in pricing for a node with 48 tb18:59
zykes-contra 2418:59
zykes-guaqua: ?19:04
zykes-anyone here got calculations or so on what the formula is for the cost of a swift deployment ?19:07
*** javiF has quit IRC19:08
*** jkyle_ has quit IRC19:10
notmynamezykes-: what are you looking for beyond the cost of your hardware?19:10
zykes-notmyname: i'm trying to make cost of hardware + cost gb/month19:11
notmynameyour opex will be determined by your DC (or whatever) space and the cost of people to keep it running (including replacing broken hardware)19:12
JesperAimpossible to say if we dont know your hosting costs19:12
zykes-notmyname: yeah19:12
zykes-but firstly i'm doing gb pr month based on server purchase costs19:12
notmynameisn't any per month cost entirely dependent on your operational setup (including your hosting costs)?19:13
zykes-notmyname: correct19:13
zykes-but i'm doing just servers purchase costs atm19:13
notmynameah ok19:14
notmynameswitches + LBs + cabling + servers + drives19:14
notmynameno software licensing costs, though ;-)19:14
JesperA=)19:15
notmynamezykes-: cloudscaling gave a presentation about 6-8 months ago that was in the neighborhood of 600K - 750K per PB for initial cap ex costs19:16
zykes-600k notmyname ?19:16
notmynameat the santa clara design summit19:16
praefect_anybody remembers where I can find smosers latest images?19:16
Glaceenotmyname: hehe seems expensive..19:16
zykes-notmyname: we're (if we're doing this) a pretty basic setup19:16
notmynameGlacee: depends on what you compare it to. :-) I haven't priced multi-PB SANs recently, but I hear they are expensive19:17
Glacee1PB to sell or 1PB before the copy of 3 files?19:17
zykes-with 3 nodes of 24*2TB nodes with 2*gig for storage net pr node19:17
notmynameGlacee: 1PB billable19:17
Glaceenotmyname: ah ok :)19:17
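As an illustration of how such numbers come together for the small setup zykes- mentioned above (3 nodes of 24 × 2 TB, three replicas); node_cost is a placeholder:

    raw       = 3 nodes × 24 drives × 2 TB ≈ 144 TB
    billable  ≈ 144 TB / 3 replicas        ≈ 48 TB
    capex/GB  ≈ (3 × node_cost + network gear) / 48,000 GB   # spread over the expected service life for a per-month figure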
colizykes: to be on the safe side with the investment, consider your first capex cost as your loss and concentrate on opex and EBITDA19:19
Glaceenotmyname: im around that number with the setup we are planning.. but that is with the crazy drive prices right now19:20
Glaceeeven lower than that... probably due to more density than the cloudscaling setup19:20
notmynameGlacee: that's good to hear (it means "normal" prices are cheaper)19:20
Glaceebut.. thats prototyping.. :) we are starting with a lot less than 1PB :019:21
notmynamedrives dominate the cost of the cluster as it gets bigger19:21
Glaceeand my concern is HDD vibration.. we will see how it holds up19:21
Glaceeusing rubber and stuff on consumer grade drives :)19:22
notmynameit gets even better as 3TB drives get priced better19:22
Glaceewer using 3TB yeah19:22
zykes-notmyname: cross region replication19:22
Glaceethats propably where the difference is with the cloudscaling 2TB setup19:22
notmynameGlacee: ya19:23
zykes-is that planned notmyname ?19:23
notmynamezykes-: what about it?19:23
notmynamezykes-: define "region"19:23
zykes-"Large Single Uploads (Pending Approval)19:23
*** po has quit IRC19:23
zykes-+ "Multi-region support (Future - Not Started)"19:23
zykes-is a thing we would want :/19:23
notmynamezykes-: replication across a wide geographic area (ie with higher latency) is definitely on the "we need to figure this out" list19:24
notmynamezykes-: the large single objects handled  by the proxy rather than the client is planned, too. probably sooner than high latency replication (but I don't set the dev priorities, only try to argue what they should be)19:25
notmynamezykes-: we'd also be happy to review any patches submitted for these *hint*hint*19:26
Glaceenotmyname: the container replication between cluster.. I thought that was for the wide geo replication19:26
zykes-notmyname: how about say you have datacenter x then like 5-6 km away you have datacenter y ?19:26
zykes-is that "low latency" replication as well ?19:26
notmynameGlacee: ya, it's a start19:26
Glaceenotmyname:at least with that you can offer.. some kind of Multi-Region DR19:27
notmynamezykes-: ya, that's probably not an issue now (you may have to slightly adjust some timeout settings)19:27
zykes-ok19:27
zykes-what link is recommended ?19:27
Glaceewell I think.. does that include object replication between cluster?19:27
notmynamebetween the DCs? as big as possible ;-)19:27
zykes-notmyname: so 200 mbit isn't enough ? ;p19:27
notmynamezykes-: it could be. just depends on your use case :-)19:28
notmynameand how big you want your eventual consistency window to be19:28
zykes-is 3 zones sufficient for a start ?19:29
notmynameGlacee: container sync is a start to multi-geography clusters, but what I would like to see is one logical cluster that is able to span an ocean19:29
notmynamezykes-: only 3 zones doesn't give you any handoff zones in case of failure. I'd recommend starting with 4. 3 is minimum, 4 is better, 5+ is ideal19:30
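A minimal sketch of what that looks like when building the object ring, assuming four storage servers (one per zone) with hypothetical IPs, one device each, and placeholder part power and weights:

    swift-ring-builder object.builder create 18 3 1
    swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100
    swift-ring-builder object.builder add z2-10.0.0.2:6000/sdb1 100
    swift-ring-builder object.builder add z3-10.0.0.3:6000/sdb1 100
    swift-ring-builder object.builder add z4-10.0.0.4:6000/sdb1 100
    swift-ring-builder object.builder rebalance

With three replicas spread over four zones, losing a whole zone still leaves a handoff zone to re-replicate into, which is the point notmyname makes above. The account and container rings are built the same way.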
Glaceenotmyname: ohh that is interesting.. a Multi-Region Cluster19:30
zykes-ah ok19:30
Glaceethats ambitious :)19:30
*** sdake has quit IRC19:30
notmynameindeed :-)19:30
*** arBmind has joined #openstack19:31
zykes-notmyname: is there a "list" of stuff going into essex ?19:32
Glaceehttps://blueprints.launchpad.net/swift19:34
Glaceethat would be a start zykes19:34
notmynamezykes-: not a complete on yet19:35
notmynameone19:35
Glaceenotmyname: are you going to LISA ?19:36
notmynamezykes-: I expect to have a few more details soon-ish. as I figure out what the various people using swift are working on19:36
notmynameGlacee: since I don't know what that is, I'm going to say "no"19:36
Glaceeok too bad :)19:37
Glaceehttp://www.usenix.org/events/lisa11/index.html19:37
notmynameah. I'll be traveling to San Francisco that week19:38
Glaceetoo bad19:38
*** dailylinux has joined #openstack19:46
guaquanotmyname: any idea where that presentation might have been? i'd like to see the slides... :)19:49
*** shaon has quit IRC19:50
*** GheRivero_ has joined #openstack19:51
notmynameguaqua: found it. http://joearnold.com/2011/04/28/openstack-conference-commercializing-object-storage-swift/19:53
guaquamassive! thanks!19:58
*** foexle has joined #openstack19:59
foexleahoi20:00
*** GheRivero_ has quit IRC20:10
*** GheRivero_ has joined #openstack20:10
*** koolhead17 has quit IRC20:12
*** catintheroof has joined #openstack20:12
*** miclorb_ has joined #openstack20:13
*** pixelbeat has quit IRC20:16
*** miclorb_ has quit IRC20:17
zykes-Glacee: which stuff is going in for sure atm ?20:18
zykes-notmyname: how long would you reckon to implement a 4 zone with 1 node pr zone setup ?20:18
notmynameyou mean how long to plug it all in? or to configure?20:20
*** cereal_bars has quit IRC20:24
zykes-all in all20:25
zykes-i wonder if ~20 hours20:25
zykes-or so20:25
JesperAproxy servers for Swift aren't harddrive intensive, right?20:27
notmynamezykes-: I don't think I can answer that for you20:27
JesperAAll the useful info should already be loaded in RAM on the proxy?20:27
Glacee20hours.. to have a workable swift cluster... thats ambitious :020:27
notmynameJesperA: correct. CPU, RAM, network20:27
JesperAgood20:28
notmynameJesperA: the proxy doesn't cache objects20:28
Glaceezykes: for a production cluster.. from start to prod.. I am aiming 3-4months and I find it ambitious :020:29
JesperAno i know, but it stores the storage node info so it "redirects" traffic to the nodes upon request? Maybe I got that wrong20:29
*** negronjl has joined #openstack20:30
*** cereal_bars has joined #openstack20:30
zykes-Glacee: for "initial" config i've already done stuff20:31
zykes-Glacee: for a "working" initial setup that works i think20:31
*** debo-os has joined #openstack20:34
*** arBmind has quit IRC20:39
zykes-notmyname: what would happen to data if you have 2 zones in 1 place and 2 zones in one other data center20:39
zykes-and then link goes down between ?20:39
notmynamezykes-: should still work with no problem (assuming no other HDD failures, etc)20:40
*** dysinger has joined #openstack20:40
*** dysinger1 has joined #openstack20:43
zykes-notmyname: you know what immediate features are going in at essex ?20:44
*** dysinger has quit IRC20:45
*** coli has quit IRC20:46
notmynamezykes-: beyond the 1.4.4 changelog (from today), I can't tell you a specific list right now. however, I can say that our focus right now is around scale (both scaling up and scaling down) and polish (bugfixes and feature augmentation)20:46
notmynamefor example, I'd like it to be easier for smaller clusters to be deployed20:47
notmynameand I'd also like to see some features that allow for even bigger scale (like container sharding)20:47
notmynameas far as feature enhancements, I'd like to see stuff like automatic manifest creation on large objects20:48
zykes-notmyname: what about metadata ?20:48
notmynameand improvements on resource usage for internal processes (replication, auditors, etc)20:49
notmynamewhat about metadata?20:49
*** dysinger has joined #openstack20:49
*** dysinger1 has quit IRC20:49
zykes-searchable + filemetadata20:50
*** GheRivero_ has quit IRC20:50
Glaceenotmyname: what about object versioning? Is that something that is of any interest for you?20:50
notmynameGlacee: of limited interest. there are some technical challenges to making it work well on the server-side (not that that should be a blocker....) and it's easy to do _very_ well on the client side20:51
notmynameGlacee: so I'd like to see it, but it's lower on the priority list20:51
*** dysinger has quit IRC20:51
*** dysinger1 has joined #openstack20:51
Glaceenotmyname: good to know20:51
notmynamethe main focus now is polish (rather than adding new stuff)20:52
notmynameI've written about this here http://programmerthoughts.com/openstack/swift-state-of-the-project/20:52
zykes-notmyname: how hard is it to add filemetadata + search to it ?20:54
*** jkyle has joined #openstack20:54
notmynamezykes-: you mean sorting on arbitrary metadata? (we already support setting arbitrary metadata on objects)20:54
notmynameand by sorting, I'm specifically referring to the ordering of container listings20:55
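Setting arbitrary metadata, as notmyname notes, is already just a header on the object API; a sketch with placeholder host, token and names:

    curl -X POST -H "X-Auth-Token: $TOKEN" \
         -H "X-Object-Meta-Reviewed: yes" \
         https://swift.example.com/v1/AUTH_account/container/object
    # POST replaces the object's user metadata; GET/HEAD return it as X-Object-Meta-* headers

What swift does not do is index or search that metadata, which is why the searchable-metadata case stays on the client side for now.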
GlaceeInteresting article.. thanks20:55
zykes-   notmyname aren't there other companies working on swift?20:57
notmynamezykes-: there are many companies deploying (and probably doing some internal dev). but there haven't been any large contributions to swift from outside of rackspace20:58
zykes-:/20:58
zykes-sad that nova is taking the starlight ;p20:58
*** jkyle has quit IRC20:58
notmynameindeed ;-)20:58
notmynameactually, I think it's great they get a lot of attention. lots of people are interested in the cloud, and most people think "compute" when the hear "cloud"20:59
notmynameI think storage is fundamental20:59
*** jmckenty has joined #openstack20:59
notmynamebut nova has different challenges to face. some of which are because it gets so much focus from a diverse group of people21:00
Glaceeagreed.. if I check thenew "cloud" provider in my region.. their cloud is compute only21:01
notmynamebut as swift PTL, I think part of my job is to do some amount of "tech evangelism" for swift so we do get more people involved in contributing code to it21:01
zykes-;)21:02
GlaceeI think that object storage is the foundation of a webscale application21:02
zykes-how is "future-large-single-uploads" handled now ?21:02
zykes-and "future-searchable-metadata"21:02
Glaceethats why we started our project with object storage instead of compute21:03
*** jmckenty has quit IRC21:03
GlaceeI think to have a real cloud.. you need both really21:03
notmynamezykes-: implementing either one of those is up to the client now. for example, the client has to split the object and create the manifest. or the client can maintain a separate metadata store about the obejcts21:03
zykes-k21:04
zykes-how is s3 doing it ?21:04
*** rustam has joined #openstack21:04
notmynameI'm not aware that they support either of those21:05
zykes-so if i wanted to store something larger than 5 GB today how is that handled ?21:06
*** rbp has joined #openstack21:06
notmynamezykes-: split the large object into chunks <5GB. upload them. then create a zero-byte file with the appropriate x-object-manifest header. when the manifest object is fetched, it will stream all the parts serially21:07
Glaceecontainers sharding.. interesting :)21:07
notmynamezykes-: http://swift.openstack.org/overview_large_objects.html21:07
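A rough sketch of the pattern notmyname describes (and that the overview_large_objects page documents): upload the pieces as ordinary objects, then PUT a zero-byte object whose X-Object-Manifest header names their container/prefix. The endpoint, token, file names and the python-requests client are assumptions for illustration only.

    import requests  # assumed HTTP client

    BASE = "https://storage.example.com/v1/AUTH_account"   # hypothetical storage URL
    AUTH = {"X-Auth-Token": "AUTH_tk_example"}              # hypothetical auth token
    SEGMENT_SIZE = 5 * 1024 ** 3 - 1                        # stay under the per-object limit

    # make sure the containers exist (a PUT on a container creates it)
    for container in ("segments", "backups"):
        requests.put("%s/%s" % (BASE, container), headers=AUTH)

    # 1. split the large file into <5GB chunks and upload each as a normal object;
    #    the zero-padded index keeps the segments in sorted (streaming) order.
    #    chunks are read into memory here purely to keep the sketch short.
    with open("backup.tar", "rb") as src:
        index = 0
        while True:
            chunk = src.read(SEGMENT_SIZE)
            if not chunk:
                break
            requests.put("%s/segments/backup.tar/%08d" % (BASE, index),
                         headers=AUTH, data=chunk)
            index += 1

    # 2. create the zero-byte manifest; GETting it streams the segments back serially
    manifest = dict(AUTH)
    manifest["X-Object-Manifest"] = "segments/backup.tar/"
    requests.put("%s/backups/backup.tar" % BASE, headers=manifest, data=b"")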
notmynameGlacee: ya, that should eliminate the practical limitation of containers with high cardinality21:08
*** redconnection has quit IRC21:09
JesperAContainers work like folders, right? If i want to look up the filesize in PHP, can i provide the link to the file and have it calculate it? Or do i have to store that kind of value in the database? (stupid question, i know, but i have to be sure).21:09
JesperAFrom a webserver to Swift storage that is21:09
notmynameJesperA: containers are only sort of like folders (in that swift only sort of has filesystem similarities--swift isn't a filesystem). technically, a container is a namespace within an account21:11
notmynameJesperA: the best way to get the size of an object is to HEAD the object and look at the headers21:11
*** redconnection has joined #openstack21:12
*** dysinger1 has quit IRC21:12
JesperAnotmyname ok so there is no real path to an object?21:12
notmynameJesperA: depends on what you mean by "path". each object is referenced by a unique URL of the form <storagedomain>/v1/<account>/<container>/<object>21:13
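Putting the two answers together: using that URL form, a HEAD on the object returns its size in Content-Length (plus etag and any metadata), so nothing needs to be duplicated in a database. The URL, token and python-requests usage below are illustrative assumptions.

    import requests  # assumed HTTP client

    # <storagedomain>/v1/<account>/<container>/<object> -- placeholder values below
    url = "https://storage.example.com/v1/AUTH_account/photos/cat.jpg"
    resp = requests.head(url, headers={"X-Auth-Token": "AUTH_tk_example"})
    size_bytes = int(resp.headers["Content-Length"])   # object size, straight from swift
    print(size_bytes)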
*** coli has joined #openstack21:14
*** TheOsprey has quit IRC21:14
JesperAok, hmm, is it possible to symlink into that structure from the webserver?21:15
notmynameno. you can't mount swift21:15
notmynameyou may find a fuse layer for it, but it will have some serious performance limitations21:15
KiallJesperA: it sounds like you don't fully understand what Swift is.. Consider swift a web service API to store files.. That's the only interface your application can know of..21:16
KiallThe files could be on a server on the other side of the world, local file system access to files is not possible with Swift.21:16
JesperAKiall yeah i know, but i was just looking for an easy way to move files from the webserver into the swift storage21:17
Kialla HTTP PUT request :)21:17
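The PUT Kiall mentions is a one-liner from PHP, Python or anything else that speaks HTTP; a hedged Python sketch with a made-up local path, endpoint and token:

    import requests  # assumed HTTP client

    with open("/var/www/uploads/report.pdf", "rb") as src:   # hypothetical file on the webserver
        requests.put("https://storage.example.com/v1/AUTH_account/docs/report.pdf",
                     headers={"X-Auth-Token": "AUTH_tk_example"},
                     data=src)                               # streamed from disk, not read into memory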
*** nacx has joined #openstack21:18
JesperAstill wouldn't work with a lot of our PHP code21:19
JesperAbut i guess rewriting those is our smallest problem :P21:19
*** rsampaio has joined #openstack21:19
notmynameJesperA: have you seen the php language bindings for swift? (actually they are for rackspace cloud files, but they will work with any swift deployment)21:19
JesperANope i have not seen those21:20
JesperAhttps://github.com/notmyname/php-cloudfiles21:21
JesperA?21:21
notmynameJesperA: https://github.com/rackspace/php-cloudfiles21:21
notmynameya21:21
JesperAGreat, thanks21:25
*** TheOsprey has joined #openstack21:28
*** sdake has joined #openstack21:32
zykes-oh dang notmyname21:34
zykes-is 1.4.5 the current ?21:34
notmynamezykes-: as of today, yes :-)21:35
notmynamewell, 1.4.4 was released today21:35
notmynameso the code has 1.4.5-dev set at the version21:35
zykes-so that feature is in the current version or ?21:35
notmynamewhich feature? the large objects?21:35
zykes-link you pasted21:35
*** PotHix has quit IRC21:36
*** PotHix has joined #openstack21:36
*** krow has joined #openstack21:37
JesperAnotmyname sorry, stupid question again, the API is needed when a webserver wants to delete files too?21:37
notmynamezykes-: ya, it's been in swift for a while. looking for the link21:37
zykes-notmyname: what's a "ring" and ring builder ?21:37
zykes-and recon21:38
notmynamezykes-: large objects was added almost exactly one year ago https://code.launchpad.net/~gholt/swift/lobjects4/+merge/4359621:38
notmynameJesperA: the only way to interact with swift is through the http api. the language bindings add some helper functions for that.21:39
notmynamezykes-: recon is a tool for deployers that allows swift to report on itself21:39
zykes-deployers meaning ?21:40
notmynamezykes-: recon http://swift.openstack.org/admin_guide.html#cluster-telemetry-and-monitoring21:40
notmynamezykes-: deployers == the person running the swift cluster21:40
JesperAnotmyname oh ok, we delete every file once it has been stored for 3 months, but that won't be a problem using the http api then?21:41
notmynamezykes-: rings http://swift.openstack.org/overview_ring.html21:41
notmynameJesperA: no problem21:41
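The cleanup JesperA describes is just a container listing plus a DELETE per expired object. A minimal sketch, assuming the same placeholder endpoint, token, container name and cutoff date as illustration, with python-requests as the client:

    import requests  # assumed HTTP client

    BASE = "https://storage.example.com/v1/AUTH_account"   # hypothetical storage URL
    AUTH = {"X-Auth-Token": "AUTH_tk_example"}              # hypothetical auth token

    # ?format=json listings include name, bytes and last_modified for each object
    listing = requests.get("%s/docs" % BASE, params={"format": "json"}, headers=AUTH).json()
    for obj in listing:
        if obj["last_modified"] < "2011-08-24":             # crude 3-month cutoff for the sketch
            # object names with special characters would need URL-quoting
            requests.delete("%s/docs/%s" % (BASE, obj["name"]), headers=AUTH)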
JesperAAwesome21:42
JesperAI love that this kind of stuff is open source, awesome job21:43
*** koolhead17 has joined #openstack21:43
notmynameJesperA: that's thanks to some very hard work by some execs at rackspace and nasa21:43
JesperAYeah, considering the hard work it is even more impressive that it is open source :P21:45
JesperAMust have taken a huge amount of time getting to the point where it is today21:45
*** sdake has quit IRC21:45
zykes-notmyname: can one do like a swift rebalance ?21:46
Kiallzykes-: sounds like MogileFS terms? Moving from MFS?21:46
zykes-Kiall: like to "balance" data21:46
zykes-or does it do that automagically21:47
notmynamezykes-: the data is automatically balanced throughout the cluster (and rebalances itself as you resize the cluster)21:47
zykes-ok21:48
zykes-so say you change a drive21:48
zykes-in node x of 4 zones and you do that for each node in each zone21:49
zykes-then it automatically scales up ?21:49
notmynameyup21:49
zykes-but then it's like a raid, you need to scale up at least 1 node per zone ?21:50
KiallWas handed 5 old-ish but not too old servers today and asked to take anything useful out.. So far I have 30x 250GB SATA150 HD's -_- Waste of effort undoing 120 screws!21:50
notmynameit's a good idea to keep the zones the same size. you can either expand the zones or add new zones21:50
zykes-notmyname: but then21:53
zykes-say you do like you sometimes do in a raid21:53
zykes-you got "working" existing hardware but with small disks21:53
notmynameand you want to upgrade disks or add larger disks21:53
notmyname?21:53
notmynameno problem21:53
zykes-and the baseline hardware takes bigger (+X TB) disks compared to the ones you have21:53
zykes-that's what i meant21:53
zykes-what commands etc do you use ?21:54
*** cereal_bars has quit IRC21:54
*** miclorb_ has joined #openstack21:54
*** catintheroof has quit IRC21:54
notmynameas you are adding devices to the ring, set the weight appropriately. a good start is to set the weight to the number of GB in each drive. for example, a 2TB drive can have a weight of 2000 and a 3 TB drive has a weight of 300021:55
notmynamethe weights don't mean anything except in relation to one another21:55
notmynameand it's used to ensure that heterogeneous drives grow evenly21:55
notmynameit also allows you to slowly fill or drain a particular drive (or group of drives) by slowly raising or lowering the associated weight21:56
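A tiny sketch of what "the weights don't mean anything except in relation to one another" works out to: each device's share of the ring's partitions is roughly its weight divided by the total, so GB-sized weights keep mixed drive sizes filling at the same rate, and nudging one weight up or down fills or drains just that device. The device names, weights and partition power below are made up.

    # hypothetical devices with weight ~ GB per drive, as suggested above
    weights = {"z1-d1 (2TB)": 2000, "z2-d2 (2TB)": 2000, "z3-d3 (3TB)": 3000}
    partitions = 2 ** 18                      # example partition power of 18
    total = float(sum(weights.values()))

    for dev, w in sorted(weights.items()):
        share = w / total                     # fraction of the ring this device holds
        print("%s -> ~%d partitions (%.1f%% of the ring)"
              % (dev, round(share * partitions), share * 100))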
zykes-ah21:56
notmynamefor example, when we add zones at rackspace, we add them over a period of time by raising the weights21:57
zykes-notmyname: which frontends do you know of for swift ?21:57
notmyname25%, 50%, 75%, 100%21:57
jasonahmm21:57
zykes-that's when you add a zone to a cluster?21:58
jasonamorning!21:58
*** sdake has joined #openstack21:58
*** foexle has quit IRC21:58
notmynamezykes-: cyberduck, one or more iOS apps,21:58
zykes-smestorage ?21:59
notmynamezykes-: ya, when we add zones to existing clusters21:59
*** cloudgeek has quit IRC21:59
zykes-hmm, pardon but what's the difference between a cluster and a zone ?22:00
notmynamecluster == many zones22:00
notmynamea zone is just a partition of availability in your deployment22:01
notmynameperhaps it is an isolated DC room. or set of cabinets with a separate power supply22:01
notmynamethat is highly dependent on your deployment details22:02
*** nati2 has joined #openstack22:02
Glaceenotmyname: questions about rings.. let's say that your initial setup contained a certain number of partitions when you created the ring.. let's say your ring grows to the point that each device only has 100 partitions.. you're screwed?22:04
*** ChrisAM has quit IRC22:04
jasonaasking the same q as before (so the rest of you can ignore me ;-) but.. anyone here have a purchasing spec for openstack hardware ?22:04
Glaceejasona: depends on your use case :022:04
jasonaglacee: development node for university researchers.22:05
notmynameGlacee: yes :-) at least, not without a _lot_ of effort on your part22:05
jasonalooking to run about 200-300 VMs22:05
Glaceehehe.. at rackspace.. you probably set the partition count to a very high number?22:05
jasonaa mix of 2-4-8 core VMs. between 4 and 16G of ram each mostly.22:05
*** zykes- has quit IRC22:05
notmynameGlacee: changing your partition power would require that you rehash all of the data in the cluster. that means you have to migrate it all (GET(old ring)+PUT(new ring))22:05
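Why that is so painful, roughly: Swift derives an object's partition from the top part_power bits of an MD5 over its path (plus a per-cluster hash suffix, ignored here), so changing the power changes the mapping for essentially every object, and each one has to be fetched from its old location and re-uploaded under the new ring. A simplified illustration, not Swift's actual code; the path is a placeholder.

    import struct
    from hashlib import md5

    def partition_for(path, part_power):
        """Top `part_power` bits of MD5(path) -- a simplification of swift's ring hashing."""
        top32 = struct.unpack(">I", md5(path.encode("utf-8")).digest()[:4])[0]
        return top32 >> (32 - part_power)

    path = "/AUTH_account/photos/cat.jpg"                     # hypothetical object path
    print(partition_for(path, 18), partition_for(path, 22))   # different power, different placement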
*** edolnx has quit IRC22:06
*** vipul_ has joined #openstack22:06
Kialljasona: rackspace have a published "Reference Architecture".. might be of use for you.. http://www.referencearchitecture.org/hardware-specifications/22:06
jasonai had a look22:06
*** ChrisAM1 has joined #openstack22:06
*** vipul_ has quit IRC22:06
*** Xenith has quit IRC22:06
jasonait is sort of useful but kinda mostly if you want to buy dell. which is why i was hoping to get some more feedback :)22:06
jasonai was looking for more generic feedback from people22:07
Kiallreally though, its hard to put any numbers on it without knowing the workload etc etc..22:07
Glaceenotmyname: yeah thats what I thought.. I will put a crazy number to start with and see how it reacts :022:07
*** pasik has quit IRC22:07
notmynamejasona: I'm not sure who came up with that reference architecture list22:07
*** edolnx has joined #openstack22:07
jasonawell, i would define the workload as mixed use, small to medium. i was figuring on 3-4 compute nodes, some object storage etc22:07
*** pasik has joined #openstack22:07
*** Xenith has joined #openstack22:07
jasonai.e the point is to have a working openstack cluster that we can give researchers to actually do some work on22:08
jasonarather than hand creating all the KVM machines they need22:08
jasonawe're going to give them about 200-500T of storage to do stuff with alongside that. (most of which is not swift)22:08
notmynameGlacee: don't go too big22:08
*** lionel has quit IRC22:08
Kialljasona: really, thats not the workload.. the workload is more along the lines of what are they using those VMs for.. is it I/O intensive? RAM intensive? CPU intensive etc etc...22:08
jasonaahhh22:08
notmynameGlacee: you should be able to come up with a reasonable number22:08
*** lionel has joined #openstack22:09
Glaceenotmyname: thanks for the advice.. would 2^30 be reasonable or am I crazy?22:09
KiallAt the end of the day, its that sort of information which will tell you what hardware to buy...22:09
notmynamethat's _huge_22:09
jasonait's genomics researchers ? :) yes they like more ram. cpu not so much. i/o intensive yes but only in moving large amounts of data22:09
Glaceehahah yeah thats what I thought :)22:09
jasonai.e. they need to move 100G files.. and a few TB of data to solve basic problems, but they aren't generating large i/o loads other than as workflow22:09
notmynameGlacee: ya, that allows you to have 10737418 storage volumes22:09
notmynamethat's almost 200K 60-drive servers22:10
Glaceelol yeah22:10
notmynameI somehow doubt you'll get a cluster that big22:10
jasonaunless glacee works for blizzard ? :)22:11
Kialljasona: you're not going to be able to articulate the workloads in an IRC chat ;) At the end of the day, you need to figure out what the users will be doing (in terms of CPU/RAM/disk I/O/network I/O etc) then size the hardware to handle that...22:11
Glaceejasona: hell no.. I would be ashamed now, with their Pandaria release.. what a joke22:11
jasonakiall: the users can't give me that now22:12
Kiallthen you cant size the hardware accurately :)22:12
jasonakiall: and i have to take a stab at close enough, since this has to be ordered in 3 weeks..22:12
jasonaor it sets off a chain reaction that kills a bunch of project stuff :)22:12
jasonamaybe i can't do it accurately. can i try for 'in the same city' even if i can't get 'in the same ballpark' ?22:13
Kialllol, then you need to get in front of them and annoy them until they tell you what they need ;)22:13
Glaceeis 2^22 more reasonable or still crazy in your opinion?22:13
notmynameGlacee: partition power of 30 would allow you to have a cluster with nearly 5000PB of _billable_ storage (at 80% full)22:13
notmynameassuming you use 2TB drives22:14
jasonakiall: they honestly can't give me more and they have no incentive to do that anyway.22:14
Glaceeyeah.. I realised that it was a crazy number after posting to the channel :)22:14
jasonathe cluster being built is being built partly to get them to look at this stuff and use it22:14
*** cloudgeek has joined #openstack22:14
jasonai.e get them interested in openstack and using the paas in the future. rather than the jillion small clusters around the place22:14
notmynameGlacee: 20 or 22 is quite reasonable for large clusters22:14
jasonaso with that in mind, taking a median approach, more feedback on hardware spec ? :)22:15
Kialljasona: then really, all you can do is guess.. start small. Since this is "get them to look at this stuff and use it", start with as little hardware as possible. Then you'll see what people really use.22:15
Glaceeyeah 41PB of billable storage with 2^2222:16
notmynameGlacee: 22 gives you 19PB of billable storage with 2TB drives22:16
Glaceeusing 3TB drives22:16
notmynameheh22:16
Glaceehmm22:16
notmyname2**22 / 100 * 2000 * .91 * .8 / 3 / 2**2022:16
notmyname.91 is marketing to actual formatted size22:16
notmyname.8 is 80% full22:17
notmyname3 for replica count22:17
notmyname2**20 to convert from GB to PB22:17
notmyname.91 is ok for 2TB drives. it will be different for 3TB drives22:17
Glaceethanks.. I will keep that formula.. handy...22:17
Glaceeok I will check with 3TB22:18
Kialljasona: eg start with closer to commodity hardware.. eg 1Gb ethernet not 10g, you'll quickly see if there really is a need for 10g, or if more CPU is needed, that way - you have the budget left to upgrade etc22:18
notmyname3TB marketing == 3000000000000 bytes unformatted. format and convert to base-2 measurements to get the proper ratio22:18
*** nati2 has quit IRC22:20
Glaceefrom a few websites it seems like .91 also holds for 3TB22:21
*** nati2 has joined #openstack22:21
GlaceeFormatted capacity 2.72TB22:21
notmyname2**22 / 100 * 2794 * .8 / 3 / 2**20 for 3TB (2794 is the number of GB in a 3TB drive)22:21
*** rbp has left #openstack22:22
notmyname2**22 / 100 * 2794 * .8 / 3 / 2**20 = 29.80263824462891PB billable22:22
notmynameah ok. 2.72 formatted22:22
notmynamestill. 29PB22:22
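The back-of-the-envelope formula above, wrapped up so the factors are easy to tweak; it reproduces both the ~19PB (2TB drives at the .91 formatted ratio) and ~29.8PB (3TB drives quoted at their formatted 2794GB) figures from the channel. Plain Python, no Swift involved; the defaults mirror the numbers discussed here.

    def billable_pb(part_power, gb_per_drive, formatted_ratio=1.0,
                    parts_per_drive=100, fullness=0.8, replicas=3):
        """2**part_power partitions / ~100 partitions per drive = drive count;
        times usable GB per drive (formatted, at 80% full), divided by the
        replica count, divided by 2**20 to go from GB to PB."""
        drives = 2 ** part_power / float(parts_per_drive)
        usable_gb = gb_per_drive * formatted_ratio * fullness
        return drives * usable_gb / replicas / 2 ** 20

    print(billable_pb(22, 2000, formatted_ratio=0.91))   # ~19.4 PB billable, 2TB drives
    print(billable_pb(22, 2794))                          # ~29.8 PB billable, formatted 3TB drives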
Glacee$52M/year at $0.15/GB22:25
Glaceenot bad :022:25
notmynameheh :-)22:25
JesperAHow big is the biggest implementation of Swift?22:26
JesperA(biggest known)22:26
notmynameJesperA: "billions of files, petabytes of data" (unfortunately, that's all rackspace let's me say)22:26
KiallI would imagine Rackspace, but I doubt they give specifics...22:27
notmynamebut our clusters are larger than all the other published numbers I've seen22:27
Glaceeif the cluster reaches that capacity.. that's what I call a Champagne problem :022:27
JesperA=)22:27
notmynameindeed22:27
Glaceedo you think that 2^22 may be too slow if we start with around 140 devices?22:28
notmynameGlacee: the slowdowns associated with a larger ring size only come with updating the ring (which is done offline)22:29
notmynameGlacee: the other worry is the extra filesystem overhead for all the directory entries22:29
Glaceeok22:30
Glaceethanks again for your help.. that was instructive.. heading out22:30
notmynamehave a good day22:31
Glaceeyou too thanks22:31
*** Guest68173 has quit IRC22:32
*** nacx has quit IRC22:53
*** hugokuo has joined #openstack22:56
colinotmyname: are you working in UK or US ?22:56
jasonakiall: i'm definitely looking at commodity for kit. just trying to differentiate22:56
jasonakiall: wondering how other people pick between dell vs hp vs ibm vs..22:56
colikiall: hi, in my opinion your script needs a change to the ec2_dmz_host parameter in nova.conf; it should point to the local compute node where nova-api is running and not to the controller. if it points to the controller then the instance is unable to communicate with 169.254.169.254 (when the compute node is on a different host than the controller)22:57
colijasona: we've used supermicro for years and haven't had many problems. softlayer is using them as well in large numbers.22:58
colikiall: to be specific, the instance is able to communicate with 169.254.169.254 (that is, nova-api) but nova-api is unable to retrieve the metadata as the packets arrive with the source address of the compute node and not of the instance.22:59
tjoyjasona: supermicro does make good gear23:01
jasonano argument there. but if they don't supply through existing gov contracts etc.. probably can't use them23:02
notmynamecoli: us23:03
tjoycoli: isn't 169.254.whatever a local address like 127.0.0.1 ? am i missing something important?23:04
colitjoy: 169.254.169.254 is being DNAT'ed to the value of ec2_dmz_host and port 8773 (default unless set by parameter)23:05
*** MarkAtwood has quit IRC23:12
*** woleium has quit IRC23:12
*** camm has quit IRC23:12
*** webx has quit IRC23:12
*** nelson1234 has quit IRC23:12
*** clayg has quit IRC23:12
*** lucas has quit IRC23:12
*** cloud0_ has quit IRC23:12
*** guaqua has quit IRC23:12
*** medberry has quit IRC23:12
*** Spirilis has quit IRC23:12
*** Eyk^off has quit IRC23:12
*** zz_bonzay has quit IRC23:12
*** Aurelgadjo has quit IRC23:12
*** Aim has quit IRC23:12
*** miclorb_ has quit IRC23:12
*** al has quit IRC23:12
*** PiotrSikora has quit IRC23:12
*** Pommi has quit IRC23:12
*** cdub has quit IRC23:12
*** agy has quit IRC23:12
*** vidd-away has quit IRC23:12
*** Kiall has quit IRC23:12
*** pquerna has quit IRC23:12
*** aimka has quit IRC23:12
*** anticw has quit IRC23:12
*** iRTermite has quit IRC23:12
*** opsnare has quit IRC23:12
*** Vek has quit IRC23:12
*** termie has quit IRC23:12
*** cclien has quit IRC23:12
*** martin has quit IRC23:12
*** blahee has quit IRC23:12
*** olafont_ has quit IRC23:12
*** akscram has quit IRC23:12
*** cw has quit IRC23:12
*** krow has quit IRC23:12
*** rustam has quit IRC23:12
*** wariola has quit IRC23:12
*** dubenstein has quit IRC23:12
*** rods has quit IRC23:12
*** JStoker has quit IRC23:12
*** HugoKuo_ has quit IRC23:12
*** ollie1 has quit IRC23:12
*** sticky has quit IRC23:12
*** shang has quit IRC23:12
*** obino has quit IRC23:12
*** nid0 has quit IRC23:12
*** AntoniHP has quit IRC23:12
*** andyandy_ has quit IRC23:12
*** sloop has quit IRC23:12
*** Lumiere has quit IRC23:12
*** russellb has quit IRC23:12
*** martines has quit IRC23:12
*** j^2 has quit IRC23:12
*** datajerk has quit IRC23:12
*** agoddard has quit IRC23:12
*** floehmann has quit IRC23:12
*** cmagina has quit IRC23:12
*** root_ has quit IRC23:12
*** mencken has quit IRC23:12
*** Daviey has quit IRC23:12
*** dendro-afk has quit IRC23:12
*** Hunner has quit IRC23:12
*** royh has quit IRC23:12
*** hugokuo has quit IRC23:12
*** cloudgeek has quit IRC23:12
*** alekibango has quit IRC23:12
*** jsh has quit IRC23:12
*** dgags has quit IRC23:12
*** binbash_ has quit IRC23:12
*** cburgess has quit IRC23:12
*** n0ano has quit IRC23:12
*** benner has quit IRC23:12
*** keekz has quit IRC23:12
*** kirkland has quit IRC23:12
*** perlstein has quit IRC23:12
*** rwmjones has quit IRC23:12
*** jbarratt_ has quit IRC23:12
*** uvirtbot has quit IRC23:12
*** troytoman-away has quit IRC23:12
*** Xenith has quit IRC23:12
*** pasik has quit IRC23:12
*** edolnx has quit IRC23:12
*** ChrisAM1 has quit IRC23:12
*** koolhead17 has quit IRC23:12
*** tryggvil_ has quit IRC23:12
*** map_nw has quit IRC23:12
*** odyi has quit IRC23:12
*** chmouel has quit IRC23:12
*** arun has quit IRC23:12
*** mu574n9 has quit IRC23:12
*** kerouac has quit IRC23:12
*** phschwartz has quit IRC23:12
*** gondoi has quit IRC23:12
*** WormMan has quit IRC23:12
*** carlp has quit IRC23:12
*** ahale has quit IRC23:12
*** superbobry has quit IRC23:12
*** vishy has quit IRC23:12
*** nijaba has quit IRC23:12
*** kodapa_ has quit IRC23:12
*** no`x has quit IRC23:12
*** hggdh has quit IRC23:12
*** paltman has quit IRC23:12
*** GheRivero has quit IRC23:12
*** errr has quit IRC23:13
*** morellon has quit IRC23:13
*** fujin has quit IRC23:13
*** laurensell has quit IRC23:13
*** ryan_fox1985 has quit IRC23:13
*** ogelbukh has quit IRC23:13
*** mirrorbox has quit IRC23:13
*** markwash has quit IRC23:13
*** aurigus has quit IRC23:13
*** kpepple has quit IRC23:13
*** johnmark has quit IRC23:13
*** ashp has quit IRC23:13
*** lool has quit IRC23:13
*** villep has quit IRC23:13
*** DanF has quit IRC23:13
*** dotplus has quit IRC23:13
*** ivoks has quit IRC23:13
*** redconnection has quit IRC23:13
*** JesperA has quit IRC23:13
*** zul has quit IRC23:13
*** jeblair has quit IRC23:13
*** doude has quit IRC23:13
*** andyandy has quit IRC23:13
*** DuncanT has quit IRC23:13
*** snowboarder04 has quit IRC23:13
*** tjikkun has quit IRC23:13
*** blamar has quit IRC23:13
*** comstud has quit IRC23:13
*** nilsson has quit IRC23:13
*** ke4qqq has quit IRC23:13
*** dabo has quit IRC23:13
*** kodapa has quit IRC23:13
*** pfibiger has quit IRC23:13
*** tjoy has quit IRC23:13
*** hyakuhei has quit IRC23:13
*** jasona has quit IRC23:13
*** romans has quit IRC23:13
*** clayg_ is now known as clayg23:13
*** cloudgeek has joined #openstack23:13
*** Xenith has joined #openstack23:13
*** pasik has joined #openstack23:13
*** edolnx has joined #openstack23:13
*** ChrisAM1 has joined #openstack23:13
*** miclorb_ has joined #openstack23:13
*** koolhead17 has joined #openstack23:13
*** krow has joined #openstack23:13
*** rustam has joined #openstack23:13
*** tryggvil_ has joined #openstack23:13
*** wariola has joined #openstack23:13
*** alekibango has joined #openstack23:13
*** dubenstein has joined #openstack23:13
*** rods has joined #openstack23:13
*** JStoker has joined #openstack23:13
*** HugoKuo_ has joined #openstack23:13
*** ollie1 has joined #openstack23:13
*** map_nw has joined #openstack23:13
*** sticky has joined #openstack23:13
*** shang has joined #openstack23:13
*** odyi has joined #openstack23:13
*** obino has joined #openstack23:13
*** nid0 has joined #openstack23:13
*** floehmann has joined #openstack23:13
*** al has joined #openstack23:13
*** jsh has joined #openstack23:13
*** dgags has joined #openstack23:13
*** PiotrSikora has joined #openstack23:13
*** AntoniHP has joined #openstack23:13
*** chmouel has joined #openstack23:13
*** superbobry has joined #openstack23:13
*** arun has joined #openstack23:13
*** binbash_ has joined #openstack23:13
*** Pommi has joined #openstack23:13
*** hggdh has joined #openstack23:13
*** cdub has joined #openstack23:13
*** paltman has joined #openstack23:13
*** agy has joined #openstack23:13
*** cburgess has joined #openstack23:13
*** mu574n9 has joined #openstack23:13
*** andyandy_ has joined #openstack23:13
*** sloop has joined #openstack23:13
*** kerouac has joined #openstack23:13
*** GheRivero has joined #openstack23:13
*** Lumiere has joined #openstack23:13
*** russellb has joined #openstack23:13
*** n0ano has joined #openstack23:13
*** benner has joined #openstack23:13
*** vidd-away has joined #openstack23:13
*** martines has joined #openstack23:13
*** j^2 has joined #openstack23:13
*** datajerk has joined #openstack23:13
*** agoddard has joined #openstack23:13
*** phschwartz has joined #openstack23:13
*** gondoi has joined #openstack23:13
*** pquerna has joined #openstack23:13
*** cmagina has joined #openstack23:13
*** keekz has joined #openstack23:13
*** errr has joined #openstack23:13
*** kirkland has joined #openstack23:13
*** WormMan has joined #openstack23:13
*** perlstein has joined #openstack23:13
*** rwmjones has joined #openstack23:13
*** carlp has joined #openstack23:13
*** morellon has joined #openstack23:13
*** ahale has joined #openstack23:13
*** fujin has joined #openstack23:13
*** Aurelgadjo has joined #openstack23:13
*** aimka has joined #openstack23:13
*** laurensell has joined #openstack23:13
*** ryan_fox1985 has joined #openstack23:13
*** vishy has joined #openstack23:13
*** root_ has joined #openstack23:13
*** mencken has joined #openstack23:13
*** Aim has joined #openstack23:13
*** jbarratt_ has joined #openstack23:13
*** ogelbukh has joined #openstack23:13
*** nijaba has joined #openstack23:13
*** anticw has joined #openstack23:13
*** iRTermite has joined #openstack23:13
*** dotplus has joined #openstack23:13
*** Daviey has joined #openstack23:13
*** uvirtbot has joined #openstack23:13
*** mirrorbox has joined #openstack23:13
*** olafont_ has joined #openstack23:13
*** akscram has joined #openstack23:13
*** blahee has joined #openstack23:13
*** martin has joined #openstack23:13
*** cclien has joined #openstack23:13
*** termie has joined #openstack23:13
*** Vek has joined #openstack23:13
*** opsnare has joined #openstack23:13
*** cw has joined #openstack23:13
*** royh has joined #openstack23:13
*** Hunner has joined #openstack23:13
*** dendro-afk has joined #openstack23:13
*** markwash has joined #openstack23:13
*** troytoman-away has joined #openstack23:13
*** kodapa_ has joined #openstack23:13
*** no`x has joined #openstack23:13
*** aurigus has joined #openstack23:13
*** kpepple has joined #openstack23:13
*** ashp has joined #openstack23:13
*** johnmark has joined #openstack23:13
*** lool has joined #openstack23:13
*** villep has joined #openstack23:13
*** DanF has joined #openstack23:13
*** ivoks has joined #openstack23:13
*** zelazny.freenode.net sets mode: +v dendro-afk23:13
*** pixelbeat has joined #openstack23:14
*** redconnection has joined #openstack23:15
*** JesperA has joined #openstack23:15
*** zul has joined #openstack23:15
*** jeblair has joined #openstack23:15
*** doude has joined #openstack23:15
*** andyandy has joined #openstack23:15
*** DuncanT has joined #openstack23:15
*** snowboarder04 has joined #openstack23:15
*** tjikkun has joined #openstack23:15
*** blamar has joined #openstack23:15
*** comstud has joined #openstack23:15
*** nilsson has joined #openstack23:15
*** ke4qqq has joined #openstack23:15
*** dabo has joined #openstack23:15
*** kodapa has joined #openstack23:15
*** pfibiger has joined #openstack23:15
*** tjoy has joined #openstack23:15
*** hyakuhei has joined #openstack23:15
*** jasona has joined #openstack23:15
*** romans has joined #openstack23:15
*** clayg is now known as 15SAADNDC23:15
*** Kiall_ has joined #openstack23:15
*** MarkAtwood has joined #openstack23:15
*** woleium has joined #openstack23:15
*** camm has joined #openstack23:15
*** webx has joined #openstack23:15
*** nelson1234 has joined #openstack23:15
*** clayg has joined #openstack23:15
*** lucas has joined #openstack23:15
*** cloud0_ has joined #openstack23:15
*** guaqua has joined #openstack23:15
*** medberry has joined #openstack23:15
*** Spirilis has joined #openstack23:15
*** Eyk^off has joined #openstack23:15
*** zz_bonzay has joined #openstack23:15
*** JesperA has quit IRC23:15
*** JesperA has joined #openstack23:15
*** medberry is now known as Guest91323:16
*** Kiall_ is now known as Guest7957823:16
*** Pommi has quit IRC23:16
*** Guest79578 has quit IRC23:16
*** Guest79578 has joined #openstack23:16
*** Guest79578 is now known as Kiall23:16
JesperAAnyone know if it is possible to make HTTP requests to a Dell EqualLogic array?23:19
*** phschwartz has quit IRC23:22
*** phschwartz has joined #openstack23:22
*** krish has joined #openstack23:24
krishhey guys23:24
*** dailylinux has quit IRC23:24
*** Pommi has joined #openstack23:28
krishhi, i'm trying to restart nova-network23:30
krishand it fails with an error23:30
krishanyone interested in seeing a pastie of it ? :)23:30
coliwhat does the error say ?23:31
*** krish has quit IRC23:34
*** debo-os has quit IRC23:46
*** zykes- has joined #openstack23:54
*** pixelbeat has quit IRC23:55
*** bengrue has quit IRC23:56
