*** pradeep has quit IRC | 00:00 | |
*** RobertLaptop has left #openstack | 00:03 | |
*** perestrelka has quit IRC | 00:03 | |
*** perestrelka has joined #openstack | 00:04 | |
*** hallyn has quit IRC | 00:07 | |
*** jdg has quit IRC | 00:07 | |
*** martine has quit IRC | 00:09 | |
coli | kiall: have another issue with your scripts. it seems to me it's script related. | 00:10 |
coli | kiall: vm starting tries to connect to port 80 on 169.254.169.254 | 00:10 |
*** mattstep has quit IRC | 00:11 | |
coli | kiall: iptables using dnat changes destination address to port 8773 on management node (8773 it's nova-api) | 00:11 |
coli | kiall: however the iptables also use SNAT to change the source address to compute nodes real public ip address | 00:12 |
*** afm has joined #openstack | 00:13 | |
coli | kiall: nova-api on management node receives the request, treats it as received from real public ip address and searches the database for metadata for fixed_ip of real public ip address of compute node instead of vm's fixed ip | 00:13 |
coli | kiall: shouldn't the iptables only use DNAT to "redirect" to nova-api on the compute node and not on the management node? and avoid SNAT as well? | 00:15 |
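For reference, a sketch of the NAT path coli describes; all addresses below are placeholders, and the chains nova actually creates are named differently:

```sh
# DNAT: rewrite a VM's metadata request to nova-api on the management node.
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.0.1:8773
# Exclusion first: traffic from the fixed range to the DMZ (where nova-api
# lives) is accepted un-NATed, so nova-api sees the VM's fixed IP.
iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -d 10.0.0.0/24 -j ACCEPT
# SNAT: everything else from the fixed range leaves with the node's public IP.
iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -j SNAT --to-source 198.51.100.10
```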
Kiall | IRC keeps popping over my movie ;) Damn you! | 00:16 |
coli | I do apologise then and shut myself up :-) | 00:16 |
*** guigui1 has quit IRC | 00:16 | |
Kiall | Anyway - nova itself sets all those rules up, so either it's something missing/wrong in nova.conf, or a bug in nova.. :) | 00:17 |
* Kiall gets back to his film ;) | 00:17 | |
*** ldlework has quit IRC | 00:18 | |
*** mattstep has joined #openstack | 00:20 | |
*** afm1 has joined #openstack | 00:21 | |
*** ben- has joined #openstack | 00:22 | |
*** maplebed is now known as Guest89006 | 00:23 | |
*** Guest89006 has quit IRC | 00:24 | |
*** afm has quit IRC | 00:24 | |
*** ben- is now known as maplebed | 00:25 | |
*** maplebed has joined #openstack | 00:25 | |
*** afm1 has quit IRC | 00:25 | |
*** cereal_bars has quit IRC | 00:27 | |
*** rustam has quit IRC | 00:27 | |
*** rustam has joined #openstack | 00:27 | |
*** mattstep_ has joined #openstack | 00:28 | |
*** janpy has joined #openstack | 00:30 | |
*** mattstep has quit IRC | 00:30 | |
*** mattstep_ is now known as mattstep | 00:30 | |
*** rustam has quit IRC | 00:35 | |
*** rustam has joined #openstack | 00:35 | |
*** mattstep has quit IRC | 00:36 | |
*** sandywalsh has quit IRC | 00:37 | |
*** mattstep has joined #openstack | 00:37 | |
*** mattstep has quit IRC | 00:42 | |
*** adjohn has joined #openstack | 00:44 | |
*** adjohn has quit IRC | 00:44 | |
*** mattstep has joined #openstack | 00:49 | |
*** mattstep has quit IRC | 00:50 | |
*** deshantm_laptop has joined #openstack | 00:50 | |
*** theocjr has joined #openstack | 00:50 | |
*** rnorwood has quit IRC | 00:51 | |
*** livemoon has joined #openstack | 00:54 | |
*** MarkAtwood has quit IRC | 00:55 | |
*** quake has joined #openstack | 00:57 | |
*** quake has quit IRC | 01:01 | |
*** neotrino has quit IRC | 01:05 | |
*** nati2_ has joined #openstack | 01:08 | |
*** stanchan has quit IRC | 01:19 | |
*** rustam has quit IRC | 01:22 | |
*** nati2 has quit IRC | 01:22 | |
*** swill has joined #openstack | 01:22 | |
*** theocjr has quit IRC | 01:22 | |
livemoon | morning | 01:22 |
*** cmasseraf has quit IRC | 01:26 | |
*** cmasseraf has joined #openstack | 01:26 | |
*** dolphm has joined #openstack | 01:34 | |
*** nati2 has joined #openstack | 01:34 | |
uvirtbot | New bug: #894218 in nova "the instance'ip lease time in DHCPflat mode" [Undecided,New] https://launchpad.net/bugs/894218 | 01:35 |
*** pixelbeat has quit IRC | 01:36 | |
*** nati2_ has quit IRC | 01:37 | |
*** andreas__ has quit IRC | 01:38 | |
*** debo-os has joined #openstack | 01:43 | |
livemoon | hi, nova-compute stops running because of libvirtd-bin. Has anyone met this issue? | 01:43 |
*** maplebed has quit IRC | 01:45 | |
_rfz | livemoon - the "make sure virtualization is enabled in the BIOS" error? | 01:46 |
*** obino has quit IRC | 01:47 | |
livemoon | _rfz: I mean nova-compute can run for some time, maybe one or two days | 01:48 |
*** rods has quit IRC | 01:48 | |
livemoon | then nova-compute will stop updating the host status, and it seems nova-compute is not running. | 01:48 |
livemoon | I need to kill nova-compute, restart libvirtd-bin, and start nova-compute | 01:49 |
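livemoon's workaround, written out as a sketch (Ubuntu-style service names assumed; his install may differ):

```sh
sudo pkill -f nova-compute          # kill the wedged nova-compute process
sudo service libvirt-bin restart    # restart libvirt (livemoon's "libvirtd-bin")
sudo service nova-compute start     # bring nova-compute back up
```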
_rfz | livemoon, I haven't seen that error | 01:49 |
*** pradeep1 has joined #openstack | 01:50 | |
*** rackerhacker has quit IRC | 01:50 | |
livemoon | oh | 01:50 |
*** tsuzuki_ has joined #openstack | 01:50 | |
*** nati2 has quit IRC | 01:52 | |
*** sdake has quit IRC | 01:52 | |
*** nati2 has joined #openstack | 01:52 | |
*** rackerhacker has joined #openstack | 01:54 | |
*** debo-os has quit IRC | 01:54 | |
*** emid has joined #openstack | 01:55 | |
*** dragondm has joined #openstack | 01:55 | |
*** rackerhacker has quit IRC | 01:55 | |
*** rackerhacker has joined #openstack | 01:55 | |
*** 36DAAU429 has joined #openstack | 01:55 | |
*** troya has joined #openstack | 01:55 | |
*** debo-os has joined #openstack | 01:55 | |
*** troya has quit IRC | 01:58 | |
*** debo-os has quit IRC | 02:01 | |
*** vladimir3p has quit IRC | 02:02 | |
*** nati2_ has joined #openstack | 02:04 | |
*** nati2 has quit IRC | 02:06 | |
*** rackerhacker has quit IRC | 02:06 | |
*** rnorwood has joined #openstack | 02:07 | |
*** rsampaio has joined #openstack | 02:08 | |
*** sdake has joined #openstack | 02:09 | |
*** troya has joined #openstack | 02:15 | |
*** bengrue has quit IRC | 02:17 | |
*** jdurgin has quit IRC | 02:18 | |
*** debo-os has joined #openstack | 02:20 | |
troya | hi all | 02:21 |
*** debo-os has quit IRC | 02:25 | |
*** jkyle has quit IRC | 02:34 | |
*** mattstep has joined #openstack | 02:35 | |
*** mwhooker has quit IRC | 02:35 | |
*** dolphm has quit IRC | 02:38 | |
*** n8 has joined #openstack | 02:48 | |
*** n8 is now known as Guest35269 | 02:48 | |
*** emid has quit IRC | 02:52 | |
*** dolphm has joined #openstack | 02:53 | |
*** shang has quit IRC | 02:53 | |
*** osier has joined #openstack | 03:01 | |
*** nati2_ has quit IRC | 03:04 | |
*** nati2 has joined #openstack | 03:05 | |
*** shang has joined #openstack | 03:10 | |
*** Guest35269 has quit IRC | 03:14 | |
*** dpippenger has quit IRC | 03:15 | |
*** obino has joined #openstack | 03:19 | |
*** troya has quit IRC | 03:21 | |
*** sdake has quit IRC | 03:23 | |
*** negronjl has joined #openstack | 03:23 | |
*** vipul_ has joined #openstack | 03:25 | |
*** troya has joined #openstack | 03:26 | |
*** pradeep1 has quit IRC | 03:29 | |
*** sandywalsh has joined #openstack | 03:30 | |
*** shang has quit IRC | 03:33 | |
*** woleium has quit IRC | 03:33 | |
coli | kiall: I'm positive that your nova.conf on compute nodes is missing dmz_cidr and that ec2_dmz_host has the wrong value :-) | 03:34 |
coli | kiall: will explain tomorrow, going to sleep now :-) | 03:34 |
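If coli's diagnosis is right, the fix would be flags along these lines in nova.conf on the compute nodes (both flags exist in diablo-era nova; the values here are placeholders, not taken from Kiall's scripts):

```sh
# ec2_dmz_host: the internal IP the metadata DNAT should target.
--ec2_dmz_host=10.0.0.1
# dmz_cidr: range excluded from SNAT so nova-api sees the VM's fixed IP.
--dmz_cidr=10.0.0.0/24
```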
troya | hi coli | 03:36 |
coli | hi and bye ;-0 | 03:36 |
*** sdake has joined #openstack | 03:37 | |
*** sandywalsh has quit IRC | 03:37 | |
troya | bye coli | 03:40 |
*** rackerhacker has joined #openstack | 03:44 | |
*** n8 has joined #openstack | 03:48 | |
*** n8 is now known as Guest9399 | 03:48 | |
*** rnorwood has quit IRC | 03:51 | |
*** rnorwood has joined #openstack | 03:53 | |
*** MarkAtwood has joined #openstack | 03:53 | |
*** rackerhacker has quit IRC | 03:56 | |
*** map_nw_ has joined #openstack | 03:56 | |
*** deshantm_laptop has quit IRC | 03:58 | |
*** map_nw has quit IRC | 03:58 | |
*** pradeep1 has joined #openstack | 04:01 | |
*** woleium has joined #openstack | 04:02 | |
*** cmasseraf has quit IRC | 04:02 | |
*** deshantm_laptop has joined #openstack | 04:02 | |
*** deshantm_laptop has quit IRC | 04:03 | |
*** MarkAtwood has quit IRC | 04:15 | |
*** debo-os has joined #openstack | 04:17 | |
*** nati2_ has joined #openstack | 04:18 | |
*** nati2 has quit IRC | 04:20 | |
*** koolhead17 has quit IRC | 04:26 | |
*** tsuzuki_ has quit IRC | 04:38 | |
*** DavorC has joined #openstack | 04:39 | |
*** nati2_ has quit IRC | 04:40 | |
*** nati2 has joined #openstack | 04:40 | |
*** nati2 has quit IRC | 04:40 | |
*** nati2 has joined #openstack | 04:40 | |
*** nati2 has quit IRC | 04:41 | |
*** nati2 has joined #openstack | 04:41 | |
*** rsampaio has quit IRC | 04:43 | |
*** MarkAtwood has joined #openstack | 04:45 | |
*** rnorwood has quit IRC | 04:46 | |
*** abecc has quit IRC | 04:51 | |
*** rnorwood has joined #openstack | 04:52 | |
*** hadrian has quit IRC | 04:59 | |
*** vipul_ has quit IRC | 04:59 | |
*** supriya has joined #openstack | 05:00 | |
*** DavorC has quit IRC | 05:01 | |
*** rackerhacker has joined #openstack | 05:11 | |
*** mjfork has quit IRC | 05:13 | |
*** rackerhacker has quit IRC | 05:22 | |
*** debo-os has quit IRC | 05:25 | |
*** cp16net has quit IRC | 05:27 | |
*** YSPark has joined #openstack | 05:35 | |
*** koolhead17 has joined #openstack | 05:36 | |
YSPark | When a VM image is delivered by Glance, is the image located on the local server? | 05:37 |
*** nerens has joined #openstack | 05:37 | |
*** cp16net has joined #openstack | 05:37 | |
YSPark | Is it copied to the local server? | 05:37 |
YSPark | ?? | 05:38 |
*** YSPark_ has joined #openstack | 05:39 | |
*** pradeep1 has quit IRC | 05:41 | |
*** cp16net_ has joined #openstack | 05:43 | |
*** YSPark has quit IRC | 05:43 | |
*** cp16net has quit IRC | 05:45 | |
*** cp16net_ is now known as cp16net | 05:45 | |
*** odyi has quit IRC | 05:51 | |
*** odyi has joined #openstack | 05:51 | |
*** odyi has joined #openstack | 05:51 | |
*** shang has joined #openstack | 05:55 | |
uvirtbot | New bug: #843066 in keystone "Unable to auth against nova with keystone enabled novaclient ..." [High,Confirmed] https://launchpad.net/bugs/843066 | 05:55 |
*** localhost has quit IRC | 06:00 | |
*** localhost has joined #openstack | 06:01 | |
*** juddm has quit IRC | 06:01 | |
*** juddm has joined #openstack | 06:01 | |
*** cp16net has quit IRC | 06:05 | |
*** jmckenty has joined #openstack | 06:12 | |
*** winston-d has quit IRC | 06:13 | |
*** hugokuo has joined #openstack | 06:15 | |
*** HugoKuo__ has quit IRC | 06:18 | |
*** Guest9399 has quit IRC | 06:29 | |
*** n8 has joined #openstack | 06:30 | |
*** n8 is now known as Guest44675 | 06:30 | |
*** Guest44675 has quit IRC | 06:34 | |
*** arBmind has joined #openstack | 06:38 | |
*** debo-os has joined #openstack | 06:41 | |
*** rnorwood has quit IRC | 06:42 | |
*** nati2_ has joined #openstack | 06:42 | |
*** nati2 has quit IRC | 06:45 | |
*** miclorb_ has quit IRC | 06:47 | |
*** dolphm has quit IRC | 06:47 | |
*** n8 has joined #openstack | 06:56 | |
*** n8 is now known as Guest48176 | 06:56 | |
*** pradeep1 has joined #openstack | 06:57 | |
*** guigui1 has joined #openstack | 07:01 | |
*** arBmind|2 has joined #openstack | 07:07 | |
*** arBmind has quit IRC | 07:07 | |
*** kaigan_ has joined #openstack | 07:07 | |
*** Guest48176 has quit IRC | 07:08 | |
*** nati2_ has quit IRC | 07:12 | |
*** mindpixel has joined #openstack | 07:15 | |
*** mikhail has joined #openstack | 07:15 | |
*** TheOsprey has joined #openstack | 07:20 | |
*** koolhead17 has quit IRC | 07:21 | |
*** pradeep1 has quit IRC | 07:25 | |
*** jkyle has joined #openstack | 07:26 | |
*** jmckenty has quit IRC | 07:30 | |
*** debo-os has quit IRC | 07:31 | |
*** dolphm has joined #openstack | 07:36 | |
*** negronjl has quit IRC | 07:39 | |
*** pradeep1 has joined #openstack | 07:40 | |
*** jkyle has quit IRC | 07:43 | |
*** foexle has quit IRC | 07:45 | |
*** dachary has joined #openstack | 07:46 | |
*** dolphm has quit IRC | 07:50 | |
*** woleium has quit IRC | 07:51 | |
uvirtbot | New bug: #843046 in keystone "Revocation of tokens" [Wishlist,Confirmed] https://launchpad.net/bugs/843046 | 07:52 |
*** arBmind|2 has quit IRC | 07:55 | |
uvirtbot | New bug: #843064 in keystone "Nova integration docs cite bogus 'ln' command ..." [Medium,Confirmed] https://launchpad.net/bugs/843064 | 07:56 |
*** dachary has quit IRC | 07:56 | |
*** bush has joined #openstack | 07:56 | |
uvirtbot | New bug: #843053 in keystone "Packaging recipes" [Low,Confirmed] https://launchpad.net/bugs/843053 | 07:57 |
*** reidrac has joined #openstack | 07:57 | |
bush | Hi, I'm trying to use the shell script from http://devstack.org/ to build a complete OpenStack development environment. It fails when running /opt/stack/nova/bin/nova-manage db sync | 07:58 |
bush | nova.exception.ClassNotFound: Class Client could not be found: cannot import name deploy | 07:59 |
bush | Any suggestions? | 07:59 |
*** halfss has joined #openstack | 08:01 | |
*** troya has quit IRC | 08:02 | |
uvirtbot | New bug: #843057 in keystone "AdminURL should not be returned on ServiceAPI (dup-of: 854104)" [High,Confirmed] https://launchpad.net/bugs/843057 | 08:12 |
*** dachary has joined #openstack | 08:18 | |
*** foexle has joined #openstack | 08:18 | |
*** redconnection has quit IRC | 08:21 | |
*** shaon has joined #openstack | 08:22 | |
*** Razique has joined #openstack | 08:22 | |
*** mikhail has quit IRC | 08:24 | |
*** rustam has joined #openstack | 08:25 | |
lzyeval | ㅠ | 08:26 |
foexle | hiho | 08:26 |
lzyeval | bush: did you install all dependencies? http://wiki.openstack.org/InstallFromSourc | 08:28 |
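One guess at bush's traceback (an assumption, not a confirmed fix): "cannot import name deploy" usually points at the paste.deploy module, so checking for PasteDeploy is a cheap first step:

```sh
python -c "from paste import deploy"   # reproduces the ImportError if that's it
sudo pip install PasteDeploy           # then install/upgrade the package
```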
*** popux has joined #openstack | 08:30 | |
*** sticky has quit IRC | 08:39 | |
*** pradeep1 has quit IRC | 08:39 | |
*** sticky has joined #openstack | 08:39 | |
*** irahgel has joined #openstack | 08:41 | |
*** Guest73472 is now known as mu574n9 | 08:41 | |
*** nacx has joined #openstack | 08:41 | |
*** map_nw_ has quit IRC | 08:42 | |
*** mu574n9 is now known as Guest18588 | 08:42 | |
Razique | hey foexle | 08:42 |
Razique | what's up my friend ? :) | 08:43 |
livemoon | hi, Razique | 08:47 |
*** map_nw has joined #openstack | 08:47 | |
foexle | Razique: heyho razique :) i'm fine ... verry tired today but ok ;) ... and you ? | 08:48 |
*** koolhead11 has joined #openstack | 08:49 | |
*** shaon has quit IRC | 08:50 | |
*** Guest18588 is now known as mu574n9 | 08:51 | |
*** mu574n9 has quit IRC | 08:51 | |
*** mu574n9 has joined #openstack | 08:51 | |
*** koolhead11 has joined #openstack | 08:53 | |
koolhead11 | hi all | 08:53 |
*** guigui1 has quit IRC | 08:54 | |
*** rustam has quit IRC | 08:55 | |
livemoon | hi, kool | 08:55 |
foexle | hey koolhead11 & livemoon :) | 08:56 |
koolhead11 | hi livemoon foexle | 08:56 |
*** pradeep1 has joined #openstack | 08:57 | |
*** cmu has joined #openstack | 08:58 | |
*** jedi4ever has quit IRC | 08:59 | |
*** adiantum has joined #openstack | 09:03 | |
livemoon | I have finished my scripts to install openstack | 09:04 |
foexle | great :) | 09:04 |
koolhead11 | livemoon: cool | 09:05 |
koolhead11 | livemoon: and does it use everything from the git repo? | 09:05 |
*** uksysadmin has joined #openstack | 09:10 | |
*** dobber has joined #openstack | 09:11 | |
*** guigui1 has joined #openstack | 09:12 | |
*** cmu has left #openstack | 09:13 | |
*** mgoldmann has joined #openstack | 09:15 | |
livemoon | yes, according to devstack scripts | 09:15 |
koolhead11 | livemoon: so what exactly does your script change, the keystone info for the database? | 09:16 |
koolhead11 | hola uksysadmin | 09:16 |
foexle | anyone know when the next stable version comes out ? | 09:16 |
*** pixelbeat has joined #openstack | 09:17 | |
*** dev_sa has joined #openstack | 09:17 | |
*** shaon has joined #openstack | 09:17 | |
*** javiF has joined #openstack | 09:18 | |
uksysadmin | 'sup koolhead11 | 09:19 |
Razique | hey uksysadmin koolhead11 livemoon :) | 09:19 |
koolhead11 | uksysadmin: notthing much | 09:19 |
*** popux has quit IRC | 09:19 | |
* koolhead11 kicks Razique | 09:19 | |
koolhead11 | :D | 09:19 |
Razique | hehe | 09:19 |
koolhead11 | Razique: was looking for you once i reached hope for the docs update :D | 09:19 |
uksysadmin | word all | 09:20 |
Razique | koolhead11: tell me | 09:20 |
* uksysadmin is going all 80s skater American today | 09:20 | |
*** Razique has quit IRC | 09:20 | |
*** Razique has joined #openstack | 09:20 | |
*** foexle has quit IRC | 09:22 | |
*** foexle has joined #openstack | 09:24 | |
*** dev_sa has quit IRC | 09:25 | |
*** MarkAtwood has quit IRC | 09:31 | |
*** MarkAtwood has joined #openstack | 09:34 | |
*** dev_sa has joined #openstack | 09:35 | |
*** mrevell has joined #openstack | 09:36 | |
*** mrevell has quit IRC | 09:38 | |
*** mrevell has joined #openstack | 09:38 | |
*** pradeep1 has quit IRC | 09:39 | |
*** javiF has quit IRC | 09:39 | |
*** shaon has quit IRC | 09:41 | |
*** rustam has joined #openstack | 09:41 | |
*** katkee has joined #openstack | 09:42 | |
*** TheOsprey has quit IRC | 09:43 | |
*** alexn6 has joined #openstack | 09:45 | |
*** TheOsprey has joined #openstack | 09:46 | |
*** katkee has quit IRC | 09:47 | |
*** pradeep has joined #openstack | 09:54 | |
*** darraghb has joined #openstack | 09:56 | |
*** dysinger has joined #openstack | 09:58 | |
*** troya has joined #openstack | 10:01 | |
*** shaon has joined #openstack | 10:03 | |
* Razique slaps ChanServ around a bit with a large bass | 10:05 | |
alexn6 | Hi! can somebody say - is it ok to, for example, ssh from a running instance back to its public ip? (flatDhcp mode, 2 nics). One can ssh back to its private address, and everything looks correct with the iptables snat, but it still isn't possible. | 10:06 |
*** javiF has joined #openstack | 10:08 | |
*** livemoon has left #openstack | 10:10 | |
*** livemoon has joined #openstack | 10:11 | |
*** cloudgeek has joined #openstack | 10:15 | |
*** jantje_ has quit IRC | 10:23 | |
*** jantje has joined #openstack | 10:23 | |
*** livemoon has left #openstack | 10:26 | |
*** littleidea has joined #openstack | 10:33 | |
*** ccorrigan has joined #openstack | 10:37 | |
*** supriya has quit IRC | 10:37 | |
foexle | alexn6: hey, i'm sorry i don't understand what you mean :>, are you trying to get an ssh login to another instance with the backnet ip's ? | 10:37 |
*** corrigac has joined #openstack | 10:40 | |
*** ccorrigan has quit IRC | 10:42 | |
lionel | hello. Is there any documentation/tutorial on using multiple nic in nova? | 10:44 |
*** supriya has joined #openstack | 10:44 | |
alexn6 | foexle: I want to ssh from the instance back to itself but on its public IP (I add the IP with euca-associate); it's ok when sshing back on its private address | 10:45 |
*** tryggvil has quit IRC | 10:47 | |
foexle | alexn6: why do you do that ? °° ssh to localhost ? .... I'm not sure what your use case is .... so i can't give a correct answer .... but yes, you can ssh login to the same instance | 10:51 |
*** mrevell has quit IRC | 10:51 | |
*** dev_sa has quit IRC | 10:52 | |
*** mrevell has joined #openstack | 10:53 | |
*** dev_sa has joined #openstack | 10:56 | |
alexn6 | foexle: in my case I cannot do so and don't understand why. We have a service that accesses resources on the VM by public IP. | 10:58 |
foexle | so you cant login via public ip to your vm ? | 11:00 |
foexle | or only from backnet to public ip? | 11:01 |
alexn6 | foexle: what exactly I want - nova-network on v.v.v.1, the VM on private v.v.v.2 has real IP r.r.r.r. I go to the VM by ssh on the real or private IP, and then from the VM go to it again via the real IP (via the private one it's ok) | 11:01 |
foexle | ah yeah .... you need an extra nic in each vm | 11:01 |
foexle | normally you have a default route on your host server to access public ips | 11:02 |
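alexn6's symptom reads like classic hairpin NAT: the DNAT from the public IP back to the fixed IP happens on the nova-network host, but the VM's reply goes straight back over the private network and never matches the translated connection. A generic workaround sketch, reusing his placeholders (not a nova-specific fix):

```sh
# v.v.v.1 = nova-network host, v.v.v.2 = VM fixed IP, r.r.r.r = public IP.
# DNAT (already in place): public IP -> fixed IP.
iptables -t nat -A PREROUTING -d r.r.r.r -j DNAT --to-destination v.v.v.2
# Hairpin SNAT: force replies for VM-to-its-own-public-IP traffic back
# through the host, so both directions are translated consistently.
iptables -t nat -A POSTROUTING -s v.v.v.0/24 -d v.v.v.2 -j SNAT --to-source v.v.v.1
```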
*** lzyeval has quit IRC | 11:02 | |
foexle | can you use domains instead of ips? | 11:03 |
*** ahasenack has joined #openstack | 11:05 | |
*** ollie1 has joined #openstack | 11:09 | |
*** JesperA has joined #openstack | 11:09 | |
alexn6 | why would they be better? | 11:11 |
alexn6 | possibly not | 11:11 |
foexle | you can simply use your /etc/hosts file | 11:12 |
alexn6 | what is the extra nic for? are you sure? | 11:12 |
*** katkee has joined #openstack | 11:12 | |
foexle | alexn6: no, not sure :) ... i haven't heard of this use case before :) | 11:13 |
alexn6 | foexle: can you check on your installation? | 11:15 |
uvirtbot | New bug: #894333 in nova "Data Loss in VM if the vm is created from snapshot(seen this happening often)" [Undecided,New] https://launchpad.net/bugs/894333 | 11:15 |
foexle | not possible atm :) .... i'm moving the complete system to production hw .... so i have a running cloud atm | 11:16 |
*** brainsteww has joined #openstack | 11:17 | |
alexn6 | and? you just need some running linux VM in the cloud | 11:17 |
foexle | i dont have 1 | 11:17 |
*** PotHix has joined #openstack | 11:20 | |
*** mnour has joined #openstack | 11:27 | |
*** dysinger has quit IRC | 11:31 | |
*** uksysadmin has quit IRC | 11:31 | |
*** bush has quit IRC | 11:32 | |
*** katkee has quit IRC | 11:36 | |
*** dysinger has joined #openstack | 11:39 | |
*** foexle has quit IRC | 11:41 | |
*** foexle has joined #openstack | 11:41 | |
*** katkee has joined #openstack | 11:46 | |
*** guigui1 has quit IRC | 11:48 | |
*** Razique has quit IRC | 11:48 | |
*** halfss has quit IRC | 11:48 | |
uvirtbot | New bug: #894323 in nova "Nova API exposes hostId to non-admin" [Undecided,New] https://launchpad.net/bugs/894323 | 11:56 |
*** livemoon has joined #openstack | 11:56 | |
*** yshh has joined #openstack | 12:02 | |
*** HugoKuo_ has joined #openstack | 12:12 | |
zykes- | anyone have know-how on frontend products for swift? | 12:14 |
*** hugokuo has quit IRC | 12:16 | |
*** rsampaio has joined #openstack | 12:23 | |
*** MarkAtwood has quit IRC | 12:24 | |
*** cereal_bars has joined #openstack | 12:24 | |
*** abecc has joined #openstack | 12:25 | |
zykes- | notmyname: here ? | 12:31 |
*** littleidea has quit IRC | 12:36 | |
*** JStoker has quit IRC | 12:37 | |
*** rsampaio has quit IRC | 12:38 | |
*** zz_bonzay is now known as bonzay | 12:40 | |
*** JStoker has joined #openstack | 12:40 | |
*** supriya has quit IRC | 12:42 | |
*** bonzay is now known as zz_bonzay | 12:42 | |
zykes- | anyone here doing stuff with swift ? | 12:43 |
reidrac | yeep | 12:44 |
zykes- | what servers are you using ? | 12:44 |
*** hugokuo has joined #openstack | 12:45 | |
reidrac | servers? do you mean hardware? | 12:48 |
zykes- | correct | 12:48 |
*** _rfz has quit IRC | 12:49 | |
reidrac | 4U 2 x Quad Xeon with 24 disks | 12:50 |
reidrac | that's for each storage node | 12:50 |
zykes- | what's the price for one of those? | 12:51 |
reidrac | I don't have that information | 12:51 |
zykes- | ah, doh ;) | 12:51 |
zykes- | you know which server model ? | 12:51 |
reidrac | you can look for "4U 2 x Quad Xeon with 24 disks" in google | 12:52 |
zykes- | dells ? | 12:52 |
reidrac | I'm not in ops, I don't deploy the hw :) | 12:53 |
reidrac | not sure, we work with other providers | 12:53 |
*** Razique has joined #openstack | 12:54 | |
jasona | hmm | 12:54 |
jasona | anyone done a RFQ for openstack supported storage for swift ? | 12:55 |
*** Razique has quit IRC | 12:55 | |
*** Razique has joined #openstack | 12:55 | |
zykes- | rfq ? | 12:55 |
jasona | request for quote.. also around request for proposal or request for information | 12:55 |
jasona | i.e, if you wanted to go buy something and wanted to give vendors a list of things they had to do | 12:56 |
zykes- | i wonder what hardware i would need | 12:56 |
jasona | there's a few suggestions in the openstack doco but wondering if anyone has gone through this recently. | 12:56 |
zykes- | firstly for a starter setup | 12:56 |
zykes- | :p | 12:56 |
reidrac | zykes-: you can have swift all in one machine | 12:56 |
zykes- | isn't that a bit risky ? | 12:56 |
reidrac | it's a test setup | 12:57 |
reidrac | how many zones do you want to implement? | 12:57 |
zykes- | firstly 1 i guess | 12:57 |
zykes- | i mean 1 server | 12:57 |
zykes- | to see that it works | 12:57 |
reidrac | then is all in one | 12:57 |
reidrac | you need at least 3 zones if you want to use 3 replicas | 12:57 |
zykes- | yeah, that means 3 groups of drives | 12:58 |
*** brainsteww has quit IRC | 12:58 | |
zykes- | can't be done with 2 zones ? ;p | 12:58 |
reidrac | you said: zykes-: isn't that a bit risky ? | 12:58 |
reidrac | :) | 12:58 |
zykes- | heh | 12:58 |
reidrac | have you read the docs? | 12:58 |
zykes- | yeah | 12:59 |
reidrac | I see | 12:59 |
guaqua | you can run with 2 replicas, 2 servers | 12:59 |
hugokuo | zykes , 1 PM (physical machine) , 5 disks | 12:59 |
guaqua | the problem is, if 1 server dies, it's read-only | 12:59 |
hugokuo | zykes , two for the system, using RAID | 12:59 |
guaqua | so it's basically down then | 12:59 |
hugokuo | three disks for 3 zones | 12:59 |
guaqua | the data is intact, but it cannot operate | 12:59 |
hugokuo | if that deployment is only for personal use, it would be fine | 13:00 |
Razique | hey hugokuo | 13:00 |
zykes- | but say 3*1 tb in 3 zones | 13:00 |
hugokuo | Razique , bonjour | 13:00 |
zykes- | then you only have 1 tb of capacity ? | 13:00 |
Razique | :) | 13:00 |
hugokuo | zykes , yup | 13:00 |
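The 3*1 TB -> 1 TB arithmetic follows directly from the replica count. A minimal ring for that layout as a sketch (the commands are standard swift-ring-builder usage; the part power, IPs, ports, and device names are made up):

```sh
# 2^18 partitions, 3 replicas, 1 hour minimum between moves of a partition.
swift-ring-builder object.builder create 18 3 1
# One 1 TB device in each of the 3 zones, equal weight.
swift-ring-builder object.builder add z1-192.168.1.1:6000/sdb1 100
swift-ring-builder object.builder add z2-192.168.1.2:6000/sdb1 100
swift-ring-builder object.builder add z3-192.168.1.3:6000/sdb1 100
swift-ring-builder object.builder rebalance
# With 3 replicas over 3 x 1 TB, usable capacity = 3 TB / 3 = 1 TB.
```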
livemoon | hi,all | 13:01 |
Razique | hey zykes- | 13:01 |
Razique | livemoon: =- | 13:01 |
Razique | :) | 13:01 |
zykes- | how much memory does one need for say a server that has 12*2tb drives and 1 quad core ? | 13:01 |
Razique | zykes-: for which usage ? | 13:02 |
zykes- | swift | 13:02 |
Razique | ah ok | 13:02 |
hugokuo | zykes , for which swift worker ? | 13:02 |
Razique | i wouldn't want to misguide you :) | 13:02 |
zykes- | isn't a quad core enough then ? | 13:02 |
zykes- | hugokuo: storage node | 13:03 |
hugokuo | zykes , who will you use it ? | 13:03 |
hugokuo | internal using or ? | 13:03 |
zykes- | hugokuo: web customers and archive system | 13:03 |
zykes- | but not "heavy" public usage | 13:03 |
hugokuo | I think it would be enough for a "storage node" | 13:04 |
zykes- | how much hugokuo memory | 13:04 |
hugokuo | around 8GB ram …. but I did not test it though | 13:04 |
hugokuo | one of my swift deployments has 6 storage nodes (desktop, 4 cores, 16 G) | 13:06 |
*** guigui has joined #openstack | 13:06 | |
hugokuo | the load on each storage node is not that high | 13:06 |
hugokuo | my memory usage never goes over 1GB | 13:06 |
hugokuo | you might be interested in the HW spec that rackspace recommends | 13:07 |
troya | hi All | 13:07 |
hugokuo | zykes , http://www.referencearchitecture.org/ | 13:09 |
*** mjfork has joined #openstack | 13:14 | |
*** uksysadmin has joined #openstack | 13:15 | |
*** hugokuo has quit IRC | 13:16 | |
*** deshantm_laptop has joined #openstack | 13:25 | |
*** praefect has quit IRC | 13:26 | |
*** praefect has joined #openstack | 13:26 | |
*** dev_sa has quit IRC | 13:26 | |
zykes- | hmmm, i wonder how many hours one should count on for a "basic" cluster | 13:26 |
zykes- | with 3 zones | 13:26 |
jasona | hmm | 13:30 |
jasona | so been reading the commentary | 13:30 |
jasona | and back to original question | 13:30 |
jasona | anyone had to buy this stuff and written or got access to a rfp/rfq/rfi ? :) | 13:30 |
jasona | i'm particularly interested in how you asked vendors to supply storage around what nova needs, vs what swift needs. | 13:31 |
jasona | hmm, quiet. :) | 13:33 |
*** osier has quit IRC | 13:35 | |
JesperA | Hmm, in http://www.referencearchitecture.org/ it suggests a Dell C2100 as a storage node, why not use a Dell R510/R515 for that? Much cheaper | 13:35 |
zykes- | i'm looking at a single quad core 16 disk box with 8 gig ram | 13:37 |
*** shaon has quit IRC | 13:37 | |
zykes- | for storage nodes now | 13:37 |
JesperA | zykes- a supermicro box? | 13:38 |
reidrac | we're using 16GB in our storage nodes, but it looks like they're using around 4GB | 13:38 |
zykes- | JesperA: yes | 13:39 |
zykes- | how come ? | 13:39 |
zykes- | JesperA: why a R510/515 ? | 13:39 |
JesperA | zykes- because i am thinking about that too | 13:39 |
reidrac | it would be really useful knowing some figures :) | 13:39 |
zykes- | reidrac: that's what i'm investigating | 13:39 |
zykes- | currently i'm looking at 12-24 disk nodes | 13:39 |
zykes- | single processes | 13:39 |
zykes- | processor | 13:39 |
JesperA | zykes- will you be using a separate proxy server? | 13:41 |
jasona | thanks jesper. the reference arch is useful | 13:43 |
zykes- | JesperA: unsure yet | 13:46 |
*** abecc has quit IRC | 13:46 | |
jasona | jesper: the reference arch covers nova and bits of swift | 13:54 |
*** hadrian has joined #openstack | 13:54 | |
jasona | but i am not quite seeing the nova storage bits exactly. hmm | 13:54 |
JesperA | In the example it uses a Dell MD3200i, but it all depends how much storage is needed | 13:55 |
zykes- | for what JesperA swift? | 13:56 |
JesperA | nope, nova | 13:56 |
jasona | hmm. | 14:02 |
cloudgeek | Hi all | 14:04 |
*** pradeep has quit IRC | 14:05 | |
*** katkee has quit IRC | 14:08 | |
foexle | Razique: do you know when the next stable version is planned? | 14:08 |
Razique | foexle: yah | 14:08 |
foexle | jan 2012 ? ;) | 14:09 |
Razique | April the 5th | 14:09 |
foexle | oh ok | 14:09 |
Razique | essex 2012.1 | 14:09 |
Razique | :) | 14:09 |
foexle | ah k :> | 14:10 |
foexle | april ... with the new lts version ^^ | 14:11 |
zykes- | next stable release now is 2012.3, Razique ? | 14:11 |
zykes- | 2012.1 is already out | 14:11 |
zykes- | i thought | 14:11 |
zykes- | or how is that versioning stuff again | 14:11 |
foexle | stable = 2011.3 (diablo) | 14:11 |
*** corrigac has quit IRC | 14:12 | |
Razique | zykes-: I just checked: OpenStack 2012.1, 2012-04-05, not yet released | 14:12 |
zykes- | ok | 14:12 |
*** chemikadze has quit IRC | 14:13 | |
livemoon | Razique: have you used essex-1? | 14:15 |
Razique | livemoon: not at all, I'm still using diablo stable =d | 14:15 |
Razique | never tried trunk | 14:15 |
Razique | what about you livemoon ? | 14:15 |
livemoon | so do I | 14:15 |
livemoon | bye | 14:16 |
*** debo-os has joined #openstack | 14:16 | |
livemoon | I am ready to read my kindle | 14:16 |
*** redconnection has joined #openstack | 14:17 | |
*** livemoon has left #openstack | 14:19 | |
*** rods has joined #openstack | 14:19 | |
*** dev_sa has joined #openstack | 14:21 | |
*** _rfz has joined #openstack | 14:24 | |
zykes- | swift uses JBOD no ? | 14:24 |
*** katkee has joined #openstack | 14:24 | |
*** dubenstein has joined #openstack | 14:26 | |
dubenstein | hi #openstack | 14:26 |
zykes- | JesperA: what kind of disks are you on ? | 14:26 |
Glacee | zykes: it is recommended not to use RAID | 14:26 |
Glacee | for objects/accounts/containers | 14:26 |
zykes- | Glacee: i know | 14:26 |
JesperA | zykes- i have not decided, i am also in the planning stage | 14:27 |
dubenstein | «glance add» is taking too long, bursting the machine load to maximum; has anyone experienced an issue like that ? | 14:27 |
*** debo-os has quit IRC | 14:27 | |
jasona | hmm | 14:28 |
*** debo-os has joined #openstack | 14:28 | |
*** deshantm_laptop has quit IRC | 14:28 | |
jasona | jesper: if you are planning, do you have any docs yet ? | 14:28 |
jasona | i just finished v0.1 of a procurement spec, really looking to see what other people are specifying also :) | 14:29 |
Glacee | zykes: yeah.. I just went up the channel to read previous conversation :0 | 14:29 |
zykes- | hmmmm | 14:29 |
zykes- | i don't think this can be right | 14:30 |
JesperA | jasona but you are looking at a Nova cluster, right? | 14:30 |
Glacee | honestly.. the best bang for the buck right now that I've found.. is using a 36-disk box in 4U | 14:31 |
jasona | jesper: no, the whole shebang | 14:31 |
jasona | i need to specify nova compute servers (for lots of VMs), nova storage, swift storage, glance servers and anything else i need | 14:31 |
zykes- | Glacee: what controllers are you using ? | 14:31 |
Glacee | 3ware 24ports controllers | 14:32 |
Glacee | not using expanders | 14:32 |
zykes- | ah | 14:32 |
*** rsampaio has joined #openstack | 14:32 | |
zykes- | Glacee: i'm doing lsi jbod controllers (8 port) in a 16 slots bo | 14:32 |
zykes- | box | 14:32 |
jasona | also separately specifying big data storage to run alongside swift. | 14:32 |
JesperA | Glacee http://farm7.staticflickr.com/6062/6074472208_aaeafd80bd_b.jpg | 14:32 |
zykes- | jasona: care to share the spec or confidential? | 14:32 |
JesperA | jasona ok i think you are further into the planning stage than i am | 14:32 |
jasona | zykes: willing to share with community down the track but can't share it for a few days. i.e it does have to go to vendors first | 14:33 |
zykes- | ah | 14:33 |
Glacee | jespera: yeah backblaze.. thats one thing we consider but | 14:33 |
jasona | equipment order planned within a few weeks :) | 14:33 |
Glacee | be careful for CPU/RAM | 14:33 |
zykes- | i ended up with about | 14:33 |
zykes- | 10k $ for 16*2tb | 14:33 |
zykes- | pr storage node | 14:33 |
jasona | jesper: maybe. i need to buy kit in 2-3 weeks. must have cluster up in next 8 weeks or so. | 14:34 |
Glacee | at least just put objects on backblaze.. even then | 14:34 |
Glacee | the CPU by default is probably too low | 14:34 |
zykes- | Glacee: which one ? | 14:34 |
Glacee | zykes: ok not bad.. the box we have.. is around 16k for 66*3TB | 14:34 |
zykes- | 66*3tb ? | 14:34 |
zykes- | 1 box ? | 14:34 |
Glacee | sorry 33*3tB | 14:35 |
jasona | glaceee: you looked at dells stuff or just found better bang/buck via others ? | 14:35 |
zykes- | only bad thing is that hard drive prices are crazy expensive atm | 14:35 |
Glacee | others jasonA | 14:35 |
zykes- | Glacee: you got a spec or ? | 14:35 |
jasona | also, any comments on the 4T 2.5" drives shipping next year ? | 14:35 |
zykes- | 4T 2.5 ? | 14:35 |
zykes- | damned | 14:35 |
Glacee | hmm not handy... but ask if you want something specific | 14:35 |
Glacee | jasona: they will probably be too expensive | 14:36 |
Glacee | jasona: are you buying consume grade drives for your backblaze pod? | 14:36 |
Glacee | consumer* | 14:36 |
zykes- | Glacee: server model etc | 14:36 |
zykes- | backblaze ? | 14:36 |
jasona | glacee: not building backblaze pods but am expecting to use near consumer drives though | 14:36 |
jasona | i.e either sata, or preferred, NLSAS. | 14:36 |
Glacee | oh ok.. the picture you sent was backblaze :) jasona | 14:36 |
JesperA | it was me Glacee | 14:37 |
jasona | didn't send a pic :) | 14:37 |
jasona | jesper did! | 14:37 |
zykes- | what's backblace ? | 14:37 |
Glacee | ohh ok lol.. jasonA.. Jespera wow | 14:37 |
zykes- | blaze | 14:37 |
Glacee | confusing names :) | 14:37 |
jasona | look at openstoragepod.org or something like that zykes | 14:37 |
jasona | glacee. zykes. they're like practically identical also! :) | 14:37 |
JesperA | zykes- http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/ | 14:37 |
jasona | i would _like_ to have built a BBP as part of this project but well, not really enough money in it | 14:38 |
zykes- | bbp ? | 14:38 |
jasona | backblazepod | 14:38 |
Glacee | the thing with consumer drives.. you need to make sure that you have some anti-vibration mechanism in place | 14:38 |
Glacee | or you will have a lot of failures | 14:38 |
jasona | glacee: you mean.. a whole packet of rubber bands with each drive ?? :) | 14:38 |
Glacee | enterprise grade drives have anti-vibration features in them | 14:39 |
Glacee | jasona: yeah :) | 14:39 |
Glacee | thats one option | 14:39 |
zykes- | what drives should one use ? | 14:39 |
zykes- | hitachi ultrastar? | 14:40 |
jasona | well, in a world where money is no limit | 14:40 |
Glacee | depends on your use case :0 | 14:40 |
jasona | all SSD zykes! :) | 14:40 |
zykes- | but ultrastar is usable ? | 14:40 |
*** rbp has joined #openstack | 14:40 | |
Glacee | zykes: why not | 14:41 |
*** praefect_ has joined #openstack | 14:41 | |
*** deshantm_laptop has joined #openstack | 14:41 | |
Glacee | we're looking to modify backblaze pods.. to get more CPU/RAM and test it with swift | 14:42 |
Glacee | we'll see how it goes :0 | 14:42 |
zykes- | hmm | 14:42 |
zykes- | funny | 14:42 |
zykes- | $7,746.36 for 32 tb | 14:42 |
zykes- | not bad at all | 14:42 |
JesperA | Glacee well, it's really easy to just buy a more powerful cpu, so why modify? | 14:42 |
Glacee | JesperA the board they use has limitations on the CPU you can use, if I remember correctly | 14:43 |
Glacee | and its pretty low | 14:43 |
zykes- | isn't that pretty decent pricing ? | 14:43 |
*** praefect has quit IRC | 14:43 | |
zykes- | i wonder if i can throw in a 24-32 slot chassis | 14:43 |
Glacee | zykes:its alright | 14:43 |
zykes- | and make it even more powerful within 15K | 14:44 |
Glacee | depends of your use case | 14:44 |
Glacee | do you really need 15k for object storage? | 14:44 |
zykes- | Glacee: in $ the cost of each storage node | 14:44 |
Glacee | ohh within 15k $ sorry lol | 14:44 |
zykes- | i wanna see if i can double the amount of drives with chassis and controllers etc within 15 | 14:45 |
coli | you would be surprised: 7.2k rpm sata drives are more often than not much faster than 15k sas drives | 14:45 |
JesperA | yeah you are right Glacee the most powerful cpu on the motherboard they are using is: http://ark.intel.com/products/48501/Intel-Xeon-Processor-X3480-(8M-Cache-3_06-GHz) | 14:46 |
*** rsampaio has quit IRC | 14:50 | |
JesperA | Well, i am not sure about port multipliers, can't squeeze too much speed out of those i guess | 14:50 |
Glacee | depends on your use case.. backblaze pods.. usually get filled for backups and then they do nothing | 14:50 |
*** shaon has joined #openstack | 14:51 | |
JesperA | yeah, fits perfect for them, but i have more reads than writes | 14:51 |
*** dubenstein has quit IRC | 14:51 | |
zykes- | http://pastebin.com/N8NWus8h | 14:52 |
zykes- | that's what i'm at atm | 14:52 |
*** dubenstein has joined #openstack | 14:53 | |
zykes- | but damned | 14:54 |
zykes- | i wasn't aware that my motherboard had 14 connectors | 14:54 |
zykes- | means i can scale down on that solution to 1*4 port extra | 14:54 |
JesperA | what currency is that? | 14:54 |
zykes- | NOK | 14:54 |
zykes- | divide it by 5.5 for $ | 14:54 |
Glacee | seems like a crazy currency.. 9000$ for a chassis | 14:55 |
JesperA | ok | 14:55 |
zykes- | it's not glace | 14:55 |
zykes- | it's in norwegian kroners pre vat | 14:56 |
Glacee | oh ok | 14:56 |
JesperA | yeah, here in Sweden it would cost something like 13000 | 14:56 |
zykes- | 1551 $ | 14:56 |
royh | zykes-: hi there :) | 14:56 |
zykes- | royh: ? i know you or ;p | 14:57 |
royh | zykes-: nope. but I use the same currency as you :P | 14:58 |
*** katkee has quit IRC | 14:58 | |
*** dev_sa has quit IRC | 15:00 | |
*** pradeep has joined #openstack | 15:02 | |
dubenstein | «glance add» is taking too long, bursting the machine load to maximum; has anyone experienced an issue like that ? | 15:05 |
*** dev_sa has joined #openstack | 15:05 | |
dubenstein | trying to «glance add» oneiric-server-cloudimg-amd64.img, it's 1.4G | 15:06 |
dubenstein | no swift backend | 15:07 |
zykes- | anyone here got knowhows on the Dell DX platform ? | 15:07 |
*** katkee has joined #openstack | 15:10 | |
*** kaigan_ has quit IRC | 15:16 | |
*** guigui has quit IRC | 15:16 | |
*** dev_sa has quit IRC | 15:23 | |
*** guigui1 has joined #openstack | 15:29 | |
*** alekibango has quit IRC | 15:34 | |
*** alekibango has joined #openstack | 15:34 | |
*** foexle has quit IRC | 15:40 | |
*** debo-os has quit IRC | 15:42 | |
*** wariola has quit IRC | 15:49 | |
*** debo-os has joined #openstack | 15:50 | |
*** wariola has joined #openstack | 15:50 | |
uvirtbot | New bug: #894431 in nova "linux_net ovsinterfacedriver is setting the wrong iface-id" [Undecided,Confirmed] https://launchpad.net/bugs/894431 | 15:51 |
*** o86 has joined #openstack | 15:53 | |
*** o86 has left #openstack | 15:53 | |
*** uksysadmin has quit IRC | 15:54 | |
*** dragondm has quit IRC | 15:54 | |
*** 36DAAU429 has quit IRC | 15:54 | |
troya | hi all | 15:58 |
*** cloudgeek has quit IRC | 15:59 | |
troya | hi zykes | 15:59 |
*** TheOsprey has quit IRC | 16:01 | |
*** mindpixel has quit IRC | 16:05 | |
*** reidrac has quit IRC | 16:07 | |
*** koolhead11 has quit IRC | 16:13 | |
*** freeflyi1g has quit IRC | 16:13 | |
*** cloudgeek has joined #openstack | 16:14 | |
*** freeflying has joined #openstack | 16:15 | |
*** yshh has quit IRC | 16:16 | |
coli | anybody know what happens if a tenant has many instances and runs out of ip addresses in the fixed_ip range assigned to him ? | 16:19 |
_rfz | coli, once the ips are all used up, the next VM you try to spin up will fail - with an error something like "no more fixed IPs to lease" | 16:21 |
*** guigui1 has quit IRC | 16:23 | |
*** troya has quit IRC | 16:24 | |
*** shaon has quit IRC | 16:27 | |
*** jkyle has joined #openstack | 16:28 | |
*** n8 has joined #openstack | 16:35 | |
*** n8 is now known as Guest68173 | 16:35 | |
zykes- | anyone tell me what a bastion server is ? | 16:36 |
*** shaon has joined #openstack | 16:38 | |
jkyle | when I do a nova-manage floating list I get output like: <hostname> <ip_address>: None | 16:38 |
jkyle | does the 'None' mean this ip has not been allocated? | 16:38 |
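A hedged reading of that output, based only on the format jkyle pasted:

```sh
nova-manage floating list
# <hostname> <ip_address>: None
# The trailing column is presumably the allocation; "None" would mean the
# floating IP is in the pool but not yet allocated/associated to anything.
```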
*** gerry__ has joined #openstack | 16:39 | |
*** dobber has quit IRC | 16:40 | |
*** koolhead17 has joined #openstack | 16:45 | |
*** TheOsprey has joined #openstack | 16:47 | |
*** debo-os has quit IRC | 16:47 | |
*** pradeep has quit IRC | 16:54 | |
*** debo-os has joined #openstack | 16:54 | |
*** JesperA is now known as c014 | 16:54 | |
*** chemikadze has joined #openstack | 16:56 | |
*** c014 has quit IRC | 16:58 | |
*** Jeppelelle^aw has joined #openstack | 17:01 | |
*** jkyle has quit IRC | 17:01 | |
*** JesperA has joined #openstack | 17:02 | |
*** bryguy has quit IRC | 17:03 | |
*** cereal_bars has quit IRC | 17:03 | |
*** bryguy has joined #openstack | 17:04 | |
*** alexn6 has left #openstack | 17:06 | |
*** oonersch has joined #openstack | 17:06 | |
*** jkyle has joined #openstack | 17:08 | |
*** jkyle has joined #openstack | 17:09 | |
*** rbp has quit IRC | 17:09 | |
*** crescendo has quit IRC | 17:11 | |
*** dysinger has quit IRC | 17:12 | |
*** woleium has joined #openstack | 17:15 | |
*** mrevell has quit IRC | 17:15 | |
*** jkyle has quit IRC | 17:19 | |
*** tryggvil_ has joined #openstack | 17:20 | |
_rfz | On a FlatDHCPManager network, is it possible to ping the internal IP of the controller and compute nodes? | 17:20 |
*** katkee has quit IRC | 17:23 | |
zykes- | anyone here good at supermicro hardware? | 17:26 |
*** mnour has quit IRC | 17:30 | |
*** MarkAtwood has joined #openstack | 17:32 | |
coli | _rfz: yes | 17:32 |
*** Razique has quit IRC | 17:32 | |
coli | zykes-: what do you require ? we have a full room of these | 17:32 |
coli | _rfz: I retract my answer. what do you mean by "FlatDHCPManager network" ? a fixed_ip network ? | 17:35 |
*** pixelbeat has quit IRC | 17:35 | |
*** shaon has quit IRC | 17:36 | |
*** shaon has joined #openstack | 17:39 | |
*** jkyle has joined #openstack | 17:39 | |
*** irahgel has left #openstack | 17:42 | |
zykes- | coli: hardware for swift | 17:42 |
vidd-away | coli, i would assume he means "i have flat networking with dhcp set up and not vlan" | 17:42 |
coli | vidd: evening ;-) | 17:44 |
coli | vidd: by default nova is set to "vlan" mode for networking, how can I check what I have set up ? (there is no hint in nova.conf as far as I can see, and I installed from Kiall's scripts) | 17:45 |
coli | zykes-: there are cases 3 or 4U high which hold 28-32 drives; we use them. | 17:45 |
coli | zykes: plus 3Ware controllers, the new 3750 ones just rock, very fast. | 17:46 |
vidd-away | hello coli, if you did not specifically set up flat networking, then you have vlan | 17:46 |
zykes- | 3wares for jbod ? | 17:46 |
coli | zykes: what do you mean by jbod ? we put everything in one case, which I have mentioned. | 17:47 |
* vidd-away has to go drive halfway across the state for thanksgiving dinner =[ | 17:47 | |
zykes- | coli: but you don't raid disks no ? ... | 17:47 |
vidd-away | have fun y'all | 17:47 |
Jeppelelle^aw | coli do you have any whitepapers on the 3750? Cant seem to find them, not out in public yet? | 17:48 |
*** Jeppelelle^aw is now known as JesperA | 17:48 | |
zykes- | i can't even find the controller here | 17:48 |
*** pixelbeat has joined #openstack | 17:49 | |
coli | jepp: i'm wondering if I have made a mistake with the model, just a sec | 17:49 |
coli | 3ware 9750.... sorry | 17:50 |
*** bamedro has joined #openstack | 17:50 | |
coli | we have connected a bunch of kingston hyperx ssd drives, it was blazing fast | 17:51 |
zykes- | that's a 8 port controller ? | 17:51 |
coli | never seen anything that quick when it comes to disk arrays | 17:51 |
coli | there are 16 and 24 port versions | 17:51 |
coli | we use two 16 port per case | 17:52 |
zykes- | coli: with a expander backplane | 17:52 |
zykes- | can't you run a 2*8087 connectors with 6 gbit for all disks for sata ? | 17:52 |
zykes- | for 24 disks | 17:52 |
coli | zykes: I haven't seen them physically, so possibly it's with an expander | 17:53 |
coli | zykes: we had 6 or 8 kingston hyperx ssd drives and all were connected at 6gbit | 17:53 |
coli | zykes: afair they had to change the cables in the case as they were unreliable and speeds were slow, after changing to new cables from backplane to card (I think foxconn cables were used) it was working fine. | 17:55 |
zykes- | 10 gigabit network coli ? | 17:55 |
*** dachary has quit IRC | 17:55 | |
coli | zykes: for storage main nodes, yes. | 17:55 |
zykes- | ok | 17:57 |
coli | zykes: tests with kingston hyperx ssd drives were done in order to saisfy our curiosity ;-) | 17:57 |
JesperA | coli what make/model of switches are you using for 10Gbit? | 17:58 |
coli | afair cisco 48xx or 49xx, I will check but I think it begins with 4... ;-) | 17:58 |
zykes- | not force10 ? | 17:59 |
*** _rfz has quit IRC | 18:00 | |
JesperA | Yeah the hardware recommendation suggests a 49xx: http://www.referencearchitecture.org/hardware-specifications/ | 18:00 |
zykes- | JesperA: doesn't mean something else won't work | 18:01 |
zykes- | does one need to use 10gigabit for replication net ? | 18:02 |
Glacee | aren't 10gb cisco switches super expensive? | 18:03 |
JesperA | what cisco gear isn't? :D | 18:03 |
zykes- | JesperA: that's for aggregate switches | 18:03 |
zykes- | if you've got lots o racks | 18:03 |
zykes- | we're gonna start at 3 nodes | 18:04 |
zykes- | most likely | 18:04 |
zykes- | so no need for that | 18:05 |
*** debo-os has quit IRC | 18:05 | |
*** deshantm_laptop has quit IRC | 18:05 | |
coli | I can see c49xx and some Brocade switches | 18:05 |
*** jkyle has quit IRC | 18:05 | |
zykes- | coli what about force10 ? | 18:07 |
zykes- | hp | 18:07 |
coli | zykes: I know they exist :-) we don't use them | 18:07 |
coli | zykes: personally I have very bad experience (end of '90s) with hp switches, however as far I'm told they have moved forward a lot ;-) | 18:08 |
JesperA | coli, what is the reason behind the recommended hundreds of partitions per drive in swift? | 18:08 |
coli | zykes: if you are starting with 3 nodes why do you worry ? | 18:08 |
coli | JesperA: no idea. I didn't touch swift yet. still working out how to use nova. | 18:09 |
JesperA | oh ok | 18:09 |
zykes- | coli: i ain't | 18:09 |
zykes- | we're starting with swift if it's going to be at all | 18:09 |
*** jkyle_ has joined #openstack | 18:10 | |
coli | zykes: just out of curiosity any particular business reason to start with swift ? public or private use ? | 18:10 |
coli | by public I mean a lot of small users. | 18:11 |
zykes- | private for customers atm | 18:11 |
zykes- | ehm, so it becomes "both" | 18:11 |
zykes- | really | 18:11 |
zykes- | isn't a server of 24 disks 24 nodes ? | 18:12 |
zykes- | in swift | 18:12 |
zykes- | server_ip + disk | 18:13 |
zykes- | = node | 18:13 |
*** redconnection has quit IRC | 18:14 | |
notmyname | JesperA: swift partitions != disk partitions. swift partition == logical keyspace partition used to balance data throughout the cluster | 18:14 |
zykes- | notmyname: is my question "correct"? | 18:15 |
JesperA | oh ok that makes much more sense | 18:15 |
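To put example numbers on the "hundreds of partitions per drive" guideline from earlier (all figures here are illustrative, not from this discussion):

```sh
# part_power=18 gives 2^18 = 262,144 ring partitions; with 3 replicas that
# is 786,432 partition-replicas. Over, say, 4 nodes x 24 drives = 96 drives:
echo $(( (2**18 * 3) / (4 * 24) ))   # => 8192 partitions per drive
```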
*** adiantum has quit IRC | 18:15 | |
notmyname | zykes-: sorry didn't see it | 18:15 |
notmyname | zykes-: 10gb for replicate? | 18:15 |
zykes- | notmyname: no > $node = $server + $device | 18:16 |
Glacee | notmyname: I think you mentioned that swift keeps all the disks operating at all times? is this true or am I mistaken.. and if yes, is it because of the auditors/replicators? | 18:16 |
notmyname | normally the way I use "storage node" is as the box running the particular processes (container, account, object). or perhaps, if you deploy this way, the controller server + the JBODs | 18:17 |
coli | Glacee: what do you mean by "disks operating at all times" ? | 18:17 |
notmyname | zykes-: I think of storage volume == IP + port + mount point | 18:17 |
notmyname | Glacee: yes. the disks spin all the time because of the auditors, replicators, and also because new data can go to any disk | 18:18 |
zykes- | notmyname: same thing then anyways | 18:18 |
zykes- | if you do 1 disk pr mount point ? | 18:18 |
Glacee | notmyname: thanks | 18:19 |
notmyname | zykes-: "mount point" is more generic. eg you could do a RAID 10 volume | 18:19 |
*** oubiwann has quit IRC | 18:19 | |
zykes- | yeah | 18:19 |
*** nacx has quit IRC | 18:20 | |
notmyname | gotta go | 18:20 |
zykes- | darn | 18:20 |
notmyname | I'll be back later :-) | 18:20 |
*** rustam has quit IRC | 18:22 | |
*** darraghb has quit IRC | 18:22 | |
*** oonersch has quit IRC | 18:23 | |
zykes- | anyone here up for some economics on swift? | 18:25 |
JesperA | 10 billion dollars | 18:25 |
JesperA | :) | 18:25 |
zykes- | ;p | 18:26 |
zykes- | if i should calculate the price for gb/month of a swift zone | 18:26 |
zykes- | what formula would that be ? | 18:26 |
guaqua | zykes-: i was doing the same thing just today | 18:26 |
guaqua | it depends on the replica count, amount of zones | 18:27 |
guaqua | what kind of hardware you have | 18:27 |
zykes- | supermicro | 18:27 |
guaqua | you have to figure out how much your hardware costs | 18:28 |
zykes- | that i know | 18:28 |
guaqua | and after that, how much it costs for you to run it | 18:28 |
zykes- | i don't have the "run" costs | 18:29 |
zykes- | atm | 18:29 |
zykes- | i have the buy cost pr gb pr month | 18:29 |
guaqua | operate would be a better term for it | 18:29 |
zykes- | https://docs.google.com/spreadsheet/ccc?key=0AufFjyusNdg4dGttdmtCY3BHR21aU0ZIcDBwcmoxRlE#gid=0 | 18:29 |
guaqua | i guess swift-specific is just how many zones and how many replicas | 18:30 |
zykes- | https://docs.google.com/spreadsheet/ccc?key=0AufFjyusNdg4dGttdmtCY3BHR21aU0ZIcDBwcmoxRlE sorry is the link | 18:30 |
zykes- | what's the diff on a zone and replica? | 18:30 |
guaqua | ahh, nice | 18:30 |
*** lionel has quit IRC | 18:30 | |
guaqua | replica count means how many zones a given file is stored in | 18:30 |
guaqua | if you have 3 zones and 3 replicas, it's on all of them | 18:31 |
zykes- | that doesn't matter for the cost of hardware though ? | 18:31 |
zykes- | i'm working on the "small" setup atm | 18:31 |
guaqua | depends on how you set it up | 18:31 |
zykes- | care to help? | 18:31 |
*** dolphm has joined #openstack | 18:31 | |
*** lionel has joined #openstack | 18:32 | |
zykes- | i just need to find out the price for a zone pr month | 18:32 |
guaqua | i'm doing something similar myself | 18:32 |
*** gerry__ has quit IRC | 18:32 | |
zykes- | can we share a document ? | 18:32 |
guaqua | i'm looking at yours | 18:32 |
guaqua | let's say you have 12 2 TB drives | 18:33 |
guaqua | and you have 4 nodes | 18:33 |
guaqua | and you put them into 4 zones | 18:33 |
zykes- | 4 servers yeah | 18:33 |
zykes- | that's theoretically 4*12 nodes | 18:33 |
zykes- | in "swift" terms | 18:33 |
guaqua | so each zone consists of one physical server | 18:33 |
guaqua | i think a node is a server | 18:33 |
guaqua | 4 * 12 devices | 18:34 |
zykes- | swift_node then ;) | 18:34 |
zykes- | i only need 3 zones no ? | 18:34 |
guaqua | i'm not exactly sure how many one would like to have | 18:34 |
JesperA | 5 is the recommended minimum but 3 would work | 18:35 |
zykes- | i think 4 is "recommended" if i remember notmyname correctly | 18:35 |
zykes- | 5 is "optimal" | 18:35 |
zykes- | :p | 18:35 |
guaqua | what makes 5 the desired count? | 18:35 |
*** dolphm has quit IRC | 18:35 | |
*** krow has quit IRC | 18:39 | |
*** mgoldmann has quit IRC | 18:40 | |
*** cereal_bars has joined #openstack | 18:41 | |
*** nerens has quit IRC | 18:43 | |
*** po has joined #openstack | 18:49 | |
*** bengrue has joined #openstack | 18:52 | |
*** redconnection has joined #openstack | 18:52 | |
*** clauden__ has quit IRC | 18:56 | |
zykes- | ooho | 18:59 |
zykes- | cool thing to see the difference in pricing for a node with 48 tb | 18:59 |
zykes- | versus 24 | 18:59 |
zykes- | guaqua: ? | 19:04 |
zykes- | anyone here got calculations or so on what the formula is for the cost of a swift deployment ? | 19:07 |
*** javiF has quit IRC | 19:08 | |
*** jkyle_ has quit IRC | 19:10 | |
notmyname | zykes-: what are you looking for beyond the cost of your hardware? | 19:10 |
zykes- | notmyname: i'm trying to work out cost of hardware + cost per gb/month | 19:11 |
notmyname | your opex will be determined by your DC (or whatever) space and the cost of people to keep it running (including replacing broken hardware) | 19:12 |
JesperA | impossible to say if we dont know your hosting costs | 19:12 |
zykes- | notmyname: yeah | 19:12 |
zykes- | but firstly i'm doing gb pr month based on server purchase costs | 19:12 |
notmyname | isn't any per month cost entirely dependent on your operational setup (including your hosting costs)? | 19:13 |
zykes- | notmyname: correct | 19:13 |
zykes- | but i'm doing just servers purchase costs atm | 19:13 |
notmyname | ah ok | 19:14 |
notmyname | switches + LBs + cabling + servers + drives | 19:14 |
notmyname | no software licensing costs, though ;-) | 19:14 |
JesperA | =) | 19:15 |
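A capex-only sketch of the $/GB-month formula zykes- is after; every number below is invented for illustration:

```sh
capex=45000                   # $: servers + switches + cabling + drives
raw_tb=$((3 * 24 * 2))        # e.g. 3 nodes x 24 drives x 2 TB = 144 TB raw
replicas=3                    # usable capacity = raw / replicas
months=36                     # straight-line amortization period
usable_gb=$(( raw_tb * 1000 / replicas ))            # 48,000 GB usable
echo "scale=4; $capex / $months / $usable_gb" | bc   # ~0.0260 $/GB-month
```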
notmyname | zykes-: cloudscaling gave a presentation about 6-8 months ago that was in the neighborhood of 600K - 750K per PB for initial cap ex costs | 19:16 |
zykes- | 600k notmyname ? | 19:16 |
notmyname | at the santa clara design summit | 19:16 |
praefect_ | anybody remembers where I can find smosers latest images? | 19:16 |
Glacee | notmyname: hehe seems expensive.. | 19:16 |
zykes- | notmyname: we're (if we're doing this) a pretty basic setup | 19:16 |
notmyname | Glacee: depends on what you compare it to. :-) I haven't priced multi-PB SANs recently, but I hear they are expensive | 19:17 |
Glacee | 1PB to sell or 1PB before the 3x replication? | 19:17 |
zykes- | with 3 nodes of 24*2TB each, with 2*gig for the storage net per node | 19:17 |
notmyname | Glacee: 1PB billable | 19:17 |
Glacee | notmyname: ah ok :) | 19:17 |
coli | zykes: to be on the safe side with the investment, consider your first capex cost as a loss and concentrate on opex and EBITDA | 19:19 |
Glacee | notmyname: im around that number with the setup we are planning.. but that is with the crazy drive prices right now | 19:20 |
Glacee | even lower than that... probably due to more density than the cloudscaling setup | 19:20 |
notmyname | Glacee: that's good to hear (it means "normal" prices are cheaper) | 19:20 |
Glacee | but.. thats prototyping.. :) we are starting with a lot less than 1PB :0 | 19:21 |
notmyname | drives dominate the cost of the cluster as it gets bigger | 19:21 |
Glacee | and my concern is HDD vibration.. we will see how it holds up | 19:21 |
Glacee | using rubber and stuff on consumer grade drives :) | 19:22 |
notmyname | it gets even better as 3TB drives get priced better | 19:22 |
Glacee | wer using 3TB yeah | 19:22 |
zykes- | notmyname: cross region replication | 19:22 |
Glacee | thats propably where the difference is with the cloudscaling 2TB setup | 19:22 |
notmyname | Glacee: ya | 19:23 |
zykes- | is that planned notmyname ? | 19:23 |
notmyname | zykes-: what about it? | 19:23 |
notmyname | zykes-: define "region" | 19:23 |
zykes- | "Large Single Uploads (Pending Approval) | 19:23 |
*** po has quit IRC | 19:23 | |
zykes- | + "Multi-region support (Future - Not Started)" | 19:23 |
zykes- | is a thing we would want :/ | 19:23 |
notmyname | zykes-: replication across a wide geographic area (ie with higher latency) is definitely on the "we need to figure this out" list | 19:24 |
notmyname | zykes-: the large single objects handled by the proxy rather than the client is planned, too. probably sooner than high latency replication (but I don't set the dev priorities, only try to argue what they should be) | 19:25 |
notmyname | zykes-: we'd also be happy to review any patches submitted for these *hint*hint* | 19:26 |
Glacee | notmyname: the container replication between clusters.. I thought that was for the wide geo replication | 19:26 |
zykes- | notmyname: how about say you have datacenter x then like 5-6 km away you have datacenter y ? | 19:26 |
zykes- | is that "low latency" replication as well ? | 19:26 |
notmyname | Glacee: ya, it's a start | 19:26 |
Glacee | notmyname:at least with that you can offer.. some kind of Multi-Region DR | 19:27 |
notmyname | zykes-: ya, that's probably not an issue now (you may have to slightly adjust some timeout settings) | 19:27 |
zykes- | ok | 19:27 |
zykes- | what link is recommended ? | 19:27 |
Glacee | well I think.. does that include object replication between clusters? | 19:27 |
notmyname | between the DCs? as big as possible ;-) | 19:27 |
zykes- | notmyname: so 200 mbit isn't enough ? ;p | 19:27 |
notmyname | zykes-: it could be. just depends on your use case :-) | 19:28 |
notmyname | and how big you want your eventual consistency window to be | 19:28 |
zykes- | is 3 zones sufficient for a start ? | 19:29 |
notmyname | Glacee: container sync is a start to multi-geography clusters, but what I would like to see is one logical cluster that is able to span an ocean | 19:29 |
notmyname | zykes-: only 3 zones doesn't give you any handoff zones in case of failure. I'd recommend starting with 4. 3 is minimum, 4 is better, 5+ is ideal | 19:30 |
Glacee | notmyname: ohh that is interesting.. a Multi-Region Cluster | 19:30 |
zykes- | ah ok | 19:30 |
Glacee | thats ambitious :) | 19:30 |
*** sdake has quit IRC | 19:30 | |
notmyname | indeed :-) | 19:30 |
*** arBmind has joined #openstack | 19:31 | |
zykes- | notmyname: is there a "list" of stuff going into essex ? | 19:32 |
Glacee | https://blueprints.launchpad.net/swift | 19:34 |
Glacee | that would be a start zykes | 19:34 |
notmyname | zykes-: not a complete one yet | 19:35 |
Glacee | notmyname: are you going to LISA ? | 19:36 |
notmyname | zykes-: I expect to have a few more details soon-ish. as I figure out what the various people using swift are working on | 19:36 |
notmyname | Glacee: since I don't know what that is, I'm going to say "no" | 19:36 |
Glacee | ok too bad :) | 19:37 |
Glacee | http://www.usenix.org/events/lisa11/index.html | 19:37 |
notmyname | ah. I'll be traveling to San Francisco that week | 19:38 |
Glacee | too bad | 19:38 |
*** dailylinux has joined #openstack | 19:46 | |
guaqua | notmyname: any idea where that presentation might have been? i'd like to see the slides... :) | 19:49 |
*** shaon has quit IRC | 19:50 | |
*** GheRivero_ has joined #openstack | 19:51 | |
notmyname | guaqua: found it. http://joearnold.com/2011/04/28/openstack-conference-commercializing-object-storage-swift/ | 19:53 |
guaqua | massive! thanks! | 19:58 |
*** foexle has joined #openstack | 19:59 | |
foexle | ahoi | 20:00 |
*** GheRivero_ has quit IRC | 20:10 | |
*** GheRivero_ has joined #openstack | 20:10 | |
*** koolhead17 has quit IRC | 20:12 | |
*** catintheroof has joined #openstack | 20:12 | |
*** miclorb_ has joined #openstack | 20:13 | |
*** pixelbeat has quit IRC | 20:16 | |
*** miclorb_ has quit IRC | 20:17 | |
zykes- | Glacee: which stuff is going in for sure atm ? | 20:18 |
zykes- | notmyname: how long would you reckon to implement a 4 zone setup with 1 node per zone ? | 20:18 |
notmyname | you mean how long to plug it all in? or to configure? | 20:20 |
*** cereal_bars has quit IRC | 20:24 | |
zykes- | all in all | 20:25 |
zykes- | i wonder if ~20 hours | 20:25 |
zykes- | or so | 20:25 |
JesperA | proxy servers for Swift aren't hard-drive intensive, right? | 20:27 |
notmyname | zykes-: I don't think I can answer that for you | 20:27 |
JesperA | All the useful info should already be loaded in RAM on the proxy? | 20:27 |
Glacee | 20 hours.. to have a workable swift cluster... that's ambitious :0 | 20:27 |
notmyname | JesperA: correct. CPU, RAM, network | 20:27 |
JesperA | good | 20:28 |
notmyname | JesperA: the proxy doesn't cache objects | 20:28 |
Glacee | zykes: for a production cluster.. from start to prod.. I am aiming for 3-4 months and I find that ambitious :0 | 20:29 |
JesperA | no i know, but it stores the storage node info so it "redirects" traffic to the nodes upon request? Maybe I got that wrong | 20:29 |
*** negronjl has joined #openstack | 20:30 | |
*** cereal_bars has joined #openstack | 20:30 | |
zykes- | Glacee: for "initial" config i've already done stuff | 20:31 |
zykes- | Glacee: for a "working" initial setup, i think | 20:31 |
*** debo-os has joined #openstack | 20:34 | |
*** arBmind has quit IRC | 20:39 | |
zykes- | notmyname: what would happen to data if you have 2 zones in 1 place and 2 zones in another data center | 20:39 |
zykes- | and then link goes down between ? | 20:39 |
notmyname | zykes-: should still work with no problem (assuming no other HDD failures, etc) | 20:40 |
*** dysinger has joined #openstack | 20:40 | |
*** dysinger1 has joined #openstack | 20:43 | |
zykes- | notmyname: you know what immediate features are going into essex ? | 20:44 |
*** dysinger has quit IRC | 20:45 | |
*** coli has quit IRC | 20:46 | |
notmyname | zykes-: beyond the 1.4.4 changelog (from today), I can't tell you a specific list right now. however, I can say that our focus right now is around scale (both scaling up and scaling down) and polish (bugfixes and feature augmentation) | 20:46 |
notmyname | for example, I'd like it to be easier for smaller clusters to be deployed | 20:47 |
notmyname | and I'd also like to see some features that allow for even bigger scale (like container sharding) | 20:47 |
notmyname | as far as feature enhancements, I'd like to see stuff like automatic manifest creation on large objects | 20:48 |
zykes- | notmyname: what about metadata ? | 20:48 |
notmyname | and improvements on resource usage for internal processes (replication, auditors, etc) | 20:49 |
notmyname | what about metadata? | 20:49 |
*** dysinger has joined #openstack | 20:49 | |
*** dysinger1 has quit IRC | 20:49 | |
zykes- | searchable + filemetadata | 20:50 |
*** GheRivero_ has quit IRC | 20:50 | |
Glacee | notmyname: what about object versioning? Is that something that is of any interest for you? | 20:50 |
notmyname | Glacee: of limited interest. there are some technical challenges to making it work well on the server-side (not that that should be a blocker....) and it's easy to do _very_ well on the client side | 20:51 |
notmyname | Glacee: so I'd like to see it, but it's lower on the priority list | 20:51 |
*** dysinger has quit IRC | 20:51 | |
*** dysinger1 has joined #openstack | 20:51 | |
Glacee | notmyname: good to know | 20:51 |
notmyname | the main focus now is polish (rather than adding new stuff) | 20:52 |
notmyname | I've written about this here http://programmerthoughts.com/openstack/swift-state-of-the-project/ | 20:52 |
zykes- | notmyname: how hard is it to add file metadata + search to it ? | 20:54 |
*** jkyle has joined #openstack | 20:54 | |
notmyname | zykes-: you mean sorting on arbitrary metadata? (we already support setting arbitrary metadata on objects) | 20:54 |
notmyname | and by sorting, I'm specifically referring to the ordering of container listings | 20:55 |
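To make that concrete, a small sketch against the HTTP API with the Python `requests` library (the URL, token, and metadata keys are placeholders, not values from the conversation): swift stores any X-Object-Meta-* header you set, but searching or sorting on the values is left to the client:

```python
import requests

URL = "https://swift.example.com/v1/AUTH_test/photos/cat.jpg"  # assumed object URL
HDRS = {"X-Auth-Token": "AUTH_tk..."}                          # assumed token

# Any X-Object-Meta-* header becomes arbitrary key/value metadata on the object.
requests.post(URL, headers={**HDRS,
                            "X-Object-Meta-Camera": "E-500",
                            "X-Object-Meta-Rating": "5"}).raise_for_status()

# Metadata comes back on a HEAD; swift itself won't search or sort on it,
# so a client wanting "searchable metadata" keeps its own index.
resp = requests.head(URL, headers=HDRS)
print(resp.headers.get("X-Object-Meta-Camera"))
```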
Glacee | Interesting article.. thanks | 20:55 |
zykes- | notmyname aren't there other companies working on swift? | 20:57 |
notmyname | zykes-: there are many companies deploying (and probably doing some internal dev). but there haven't been any large contributions to swift from outside of rackspace | 20:58 |
zykes- | :/ | 20:58 |
zykes- | sad that nova is taking the spotlight ;p | 20:58 |
*** jkyle has quit IRC | 20:58 | |
notmyname | indeed ;-) | 20:58 |
notmyname | actually, I think it's great they get a lot of attention. lots of people are interested in the cloud, and most people think "compute" when they hear "cloud" | 20:59 |
notmyname | I think storage is fundamental | 20:59 |
*** jmckenty has joined #openstack | 20:59 | |
notmyname | but nova has different challenges to face. some of which are because it gets so much focus from a diverse group of people | 21:00 |
Glacee | agreed.. if I check the new "cloud" provider in my region.. their cloud is compute only | 21:01 |
notmyname | but as swift PTL, I think part of my job is to do some amount of "tech evangelism" for swift so we do get more people involved in contributing code to it | 21:01 |
zykes- | ;) | 21:02 |
Glacee | I think that object storage is the foundation of a webscale application | 21:02 |
zykes- | how is "future-large-single-uploads" handled now ? | 21:02 |
zykes- | and "future-searchable-metadata" | 21:02 |
Glacee | thats why we started our project with object storage instead of compute | 21:03 |
*** jmckenty has quit IRC | 21:03 | |
Glacee | I think to have a real cloud.. you need both really | 21:03 |
notmyname | zykes-: implementing either one of those is up to the client now. for example, the client has to split the object and create the manifest. or the client can maintain a separate metadata store about the objects | 21:03 |
zykes- | k | 21:04 |
zykes- | how is s3 doing it ? | 21:04 |
*** rustam has joined #openstack | 21:04 | |
notmyname | I'm not aware that they support either of those | 21:05 |
zykes- | so if i wanted to store something larger than 5 GB today how is that handled ? | 21:06 |
*** rbp has joined #openstack | 21:06 | |
notmyname | zykes-: split the large object into chunks <5GB. upload them. then create a zero-byte file with the appropriate x-object-manifest header. when the manifest object is fetched, it will stream all the parts serially | 21:07 |
Glacee | containers sharding.. interesting :) | 21:07 |
notmyname | zykes-: http://swift.openstack.org/overview_large_objects.html | 21:07 |
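For anyone following along, a minimal sketch of that split-and-manifest flow against the plain HTTP API, using the Python `requests` library. The storage URL, token, and the `<container>_segments` naming convention are assumptions for illustration, not part of the API; the X-Object-Manifest header is the real mechanism described above:

```python
import requests

STORAGE_URL = "https://swift.example.com/v1/AUTH_test"  # assumed endpoint
HDRS = {"X-Auth-Token": "AUTH_tk..."}                   # assumed token
SEGMENT_BYTES = 5 * 1024**3 - 1                         # keep each chunk under 5GB

def upload_large_object(container, name, path):
    # 1) upload the pieces under a common prefix in a segments container
    with open(path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(SEGMENT_BYTES)  # note: holds a whole segment in RAM
            if not chunk:
                break
            seg_url = f"{STORAGE_URL}/{container}_segments/{name}/{index:08d}"
            requests.put(seg_url, headers=HDRS, data=chunk).raise_for_status()
            index += 1
    # 2) zero-byte manifest whose X-Object-Manifest header names that prefix;
    #    a GET on the manifest streams all the parts back serially
    manifest_headers = dict(HDRS)
    manifest_headers["X-Object-Manifest"] = f"{container}_segments/{name}/"
    requests.put(f"{STORAGE_URL}/{container}/{name}",
                 headers=manifest_headers, data=b"").raise_for_status()
```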
notmyname | Glacee: ya, that should eliminate the practical limitation of containers with high cardinality | 21:08 |
*** redconnection has quit IRC | 21:09 | |
JesperA | Containers work like folders, right? If i want to look up the filesize in PHP, can i provide the link to the file and have it calculated? Or do i have to store that kind of value in the database? (stupid question, i know, but i have to be sure). | 21:09 |
JesperA | From a webserver to Swift storage that is | 21:09 |
notmyname | JesperA: containers are only sort of like folders (in that swift only sort of has filesystem similarities--swift isn't a filesystem). technically, a container is a namespace within an account | 21:11 |
notmyname | JesperA: the best way to get the size of an object is to HEAD the object and look at the headers | 21:11 |
*** redconnection has joined #openstack | 21:12 | |
*** dysinger1 has quit IRC | 21:12 | |
JesperA | notmyname ok so there is no real path to an object? | 21:12 |
notmyname | JesperA: depends on what you mean by "path". each object is referenced by a unique URL of the form <storagedomain>/v1/<account>/<container>/<object> | 21:13 |
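A minimal sketch of that HEAD lookup, using the URL form above (the domain, account, and token here are assumed placeholders):

```python
import requests

# <storagedomain>/v1/<account>/<container>/<object> -- values here are assumed
url = "https://swift.example.com/v1/AUTH_test/photos/cat.jpg"
resp = requests.head(url, headers={"X-Auth-Token": "AUTH_tk..."})
size_bytes = int(resp.headers["Content-Length"])   # object size, no download needed
etag = resp.headers.get("Etag")                    # MD5 checksum of the object
```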
*** coli has joined #openstack | 21:14 | |
*** TheOsprey has quit IRC | 21:14 | |
JesperA | ok, hmm, is it possible to symlink into that structure from the webserver? | 21:15 |
notmyname | no. you can't mount swift | 21:15 |
notmyname | you may find a fuse layer for it, but it will have some serious performance limitations | 21:15 |
Kiall | JesperA: it sounds like you don't fully understand what Swift is.. Consider swift a web service API to store files.. That's the only interface your application can know of.. | 21:16 |
Kiall | The files could be on a server on the other side of the world, local file system access to files is not possible with Swift. | 21:16 |
JesperA | Kiall yeah i know, but i was just looking for an easy way to move files from the webserver into the swift storage | 21:17 |
Kiall | a HTTP PUT request :) | 21:17 |
*** nacx has joined #openstack | 21:18 | |
JesperA | still wouldn't work with a lot of our PHP code | 21:19 |
JesperA | but i guess rewriting those is our smallest problem :P | 21:19 |
*** rsampaio has joined #openstack | 21:19 | |
notmyname | JesperA: have you seen the php language bindings for swift? (actually they are for rackspace cloud files, but they will work with any swift deployment) | 21:19 |
JesperA | Nope i have not seen those | 21:20 |
JesperA | https://github.com/notmyname/php-cloudfiles | 21:21 |
JesperA | ? | 21:21 |
notmyname | JesperA: https://github.com/rackspace/php-cloudfiles | 21:21 |
notmyname | ya | 21:21 |
JesperA | Great, thanks | 21:25 |
*** TheOsprey has joined #openstack | 21:28 | |
*** sdake has joined #openstack | 21:32 | |
zykes- | oh dang notmyname | 21:34 |
zykes- | is 1.4.5 the current ? | 21:34 |
notmyname | zykes-: as of today, yes :-) | 21:35 |
notmyname | well, 1.4.4 was released today | 21:35 |
notmyname | so the code has 1.4.5-dev set at the version | 21:35 |
zykes- | so that feature is in the current version or ? | 21:35 |
notmyname | which feature? the large objects? | 21:35 |
zykes- | link you pasted | 21:35 |
*** PotHix has quit IRC | 21:36 | |
*** PotHix has joined #openstack | 21:36 | |
*** krow has joined #openstack | 21:37 | |
JesperA | notmyname sorry, stupid question again, the API is needed when a webserver wants to delete files too? | 21:37 |
notmyname | zykes-: ya, it's been in swift for a while. looking for the link | 21:37 |
zykes- | notmyname: what's a "ring" and ring builder ? | 21:37 |
zykes- | and recon | 21:38 |
notmyname | zykes-: large objects was added almost exactly one year ago https://code.launchpad.net/~gholt/swift/lobjects4/+merge/43596 | 21:38 |
notmyname | JesperA: the only way to interact with swift is through the http api. the language bindings add some helper functions for that. | 21:39 |
notmyname | zykes-: recon is a tool for deployers that allows swift to report on itself | 21:39 |
zykes- | deployers meaning ? | 21:40 |
notmyname | zykes-: recon http://swift.openstack.org/admin_guide.html#cluster-telemetry-and-monitoring | 21:40 |
notmyname | zykes-: deployers == the person running the swift cluster | 21:40 |
JesperA | notmyname oh ok, we delete every file within 3 months of it being stored, but that won't be a problem using the http api then? | 21:41 |
notmyname | zykes-: rings http://swift.openstack.org/overview_ring.html | 21:41 |
notmyname | JesperA: no problem | 21:41 |
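A rough sketch of such a cleanup job over the HTTP API (endpoint, token, and container name are assumptions; the age check uses the Last-Modified header swift returns on a HEAD):

```python
import requests
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

BASE = "https://swift.example.com/v1/AUTH_test"  # assumed endpoint
HDRS = {"X-Auth-Token": "AUTH_tk..."}            # assumed token
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

# A plain GET on a container returns object names one per line
# (listings are capped, so a real run would page with the marker param).
for name in requests.get(f"{BASE}/uploads", headers=HDRS).text.splitlines():
    head = requests.head(f"{BASE}/uploads/{name}", headers=HDRS)
    if parsedate_to_datetime(head.headers["Last-Modified"]) < cutoff:
        requests.delete(f"{BASE}/uploads/{name}", headers=HDRS)
```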
JesperA | Awesome | 21:42 |
JesperA | I love that this kind of stuff is open source, awesome job | 21:43 |
*** koolhead17 has joined #openstack | 21:43 | |
notmyname | JesperA: that's thanks to some very hard work by some execs at rackspace and nasa | 21:43 |
JesperA | Yeah, considering the hard work it is even more impressive that it is open source :P | 21:45 |
JesperA | Must have taken a huge amount of time getting to the point where it is today | 21:45 |
*** sdake has quit IRC | 21:45 | |
zykes- | notmyname: can one do like a swift rebalance ? | 21:46 |
Kiall | zykes-: sounds like MogileFS terms? Moving from MFS? | 21:46 |
zykes- | Kiall: like to "balance" data | 21:46 |
zykes- | or does it do that automagically | 21:47 |
notmyname | zykes-: the data is automatically balanced throughout the cluster (and rebalances itself as you resize the cluster) | 21:47 |
zykes- | ok | 21:48 |
zykes- | so say you change a drive | 21:48 |
zykes- | in node x of 4 zones and you do that for each node in each zone | 21:49 |
zykes- | then it automatically scales up ? | 21:49 |
notmyname | yup | 21:49 |
zykes- | but then it's like a raid, you need to scale up at least 1 node per zone ? | 21:50 |
Kiall | Was handed 5 old-ish but not too old servers today and asked to take anything useful out.. So far I have 30x 250GB SATA150 HD's -_- Waste of effort undoing 120 screws! | 21:50 |
notmyname | it's a good idea to keep the zones the same size. you can either expand the zones or add new zones | 21:50 |
zykes- | notmyname: but then | 21:53 |
zykes- | say you do like you sometimes do in a raid | 21:53 |
zykes- | you got "working" existing hardware but with small disks | 21:53 |
notmyname | and you want to upgrade disks or add larger disks | 21:53 |
notmyname | ? | 21:53 |
notmyname | no problem | 21:53 |
zykes- | and the baseline hardware takes +X tb disks compared to the ones you have | 21:53 |
zykes- | that's what i meant | 21:53 |
zykes- | what commands etc do you use ? | 21:54 |
*** cereal_bars has quit IRC | 21:54 | |
*** miclorb_ has joined #openstack | 21:54 | |
*** catintheroof has quit IRC | 21:54 | |
notmyname | as you are adding devices to the ring, set the weight appropriately. a good start is to set the weight to the number of GB in each drive. for example, a 2TB drive can have a weight of 2000 and a 3 TB drive has a weight of 3000 | 21:55 |
notmyname | the weights don't mean anything except in relation to one another | 21:55 |
notmyname | and it's used to ensure that heterogeneous drives grow evenly | 21:55 |
notmyname | it also allows you to slowly fill or drain a particular drive (or group of drives) by slowly raising or lowering the associated weight | 21:56 |
zykes- | ah | 21:56 |
notmyname | for example, when we add zones at rackspace, we add them over a period of time by raising the weights | 21:57 |
zykes- | notmyname: which frontends do you know of for swift ? | 21:57 |
notmyname | 25%, 50%, 75%, 100% | 21:57 |
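As a concrete sketch of that workflow in the era's swift-ring-builder syntax (the builder file name, IPs, ports, and device names are placeholders): weights are set to the drive size in GB when devices are added, and a new zone can be brought in gradually by stepping its weight up and rebalancing at each step:

```sh
# assumed layout: one object ring, 2^22 partitions, 3 replicas,
# 1 hour minimum between moves of the same partition
swift-ring-builder object.builder create 22 3 1

# weight == drive size in GB, so mixed 2TB and 3TB drives fill evenly
swift-ring-builder object.builder add z1-10.0.1.10:6000/sdb 2000
swift-ring-builder object.builder add z2-10.0.2.10:6000/sdb 3000
swift-ring-builder object.builder rebalance

# bring a new zone in slowly: raise its weight in steps (25% -> 100%),
# rebalancing and pushing the ring out at each step
swift-ring-builder object.builder set_weight z5-10.0.5.10:6000/sdb 500
swift-ring-builder object.builder rebalance
```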
jasona | hmm | 21:57 |
zykes- | that's when you add a zone to a cluster? | 21:58 |
jasona | morning! | 21:58 |
*** sdake has joined #openstack | 21:58 | |
*** foexle has quit IRC | 21:58 | |
notmyname | zykes-: cyberduck, one or more iOS apps, | 21:58 |
zykes- | smestorage ? | 21:59 |
notmyname | zykes-: ya, when we add zones to existing clusters | 21:59 |
*** cloudgeek has quit IRC | 21:59 | |
zykes- | hmm, pardon but what's the diff between a cluster and a zone ? | 22:00 |
notmyname | cluster == many zones | 22:00 |
notmyname | a zone is just a partition of availability in your deployment | 22:01 |
notmyname | perhaps it is an isolated DC room. or set of cabinets with a separate power supply | 22:01 |
notmyname | that is highly dependent on your deployment details | 22:02 |
*** nati2 has joined #openstack | 22:02 | |
Glacee | notmyname: questions about rings.. lets say that your initial setup contained a certain amount of partitions when you created the ring.. and the ring grows until each device only has 100 partitions.. you're screwed? | 22:04 |
*** ChrisAM has quit IRC | 22:04 | |
jasona | asking the same q as before (so the rest of you can ignore me ;-) but.. anyone here have a purchasing spec for openstack hardware ? | 22:04 |
Glacee | jasona: depends on your use case :0 | 22:04 |
jasona | glacee: development node for university researchers. | 22:05 |
notmyname | Glacee: yes :-) at least, not without a _lot_ of effort on your part | 22:05 |
jasona | looking to run about 200-300 VMs | 22:05 |
Glacee | hehe.. at rackspace.. you probably set the partition count to a very high number? | 22:05 |
jasona | a mix of 2-4-8 core VMs. between 4 and 16G of ram each mostly. | 22:05 |
*** zykes- has quit IRC | 22:05 | |
notmyname | Glacee: changing your partition power would require that you rehash all of the data in the cluster. that means you have to migrate it all (GET(old ring)+PUT(new ring)) | 22:05 |
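A sketch of what that migration means in practice (endpoints and tokens are assumed placeholders): stand up a second cluster built with the new partition power and re-PUT everything from the old one:

```python
import requests

OLD = "https://old.example.com/v1/AUTH_x"   # cluster with the old ring
NEW = "https://new.example.com/v1/AUTH_x"   # cluster built with the new partition power
HDRS = {"X-Auth-Token": "AUTH_tk..."}       # assumed token

# account GET lists containers, container GET lists objects, one name per line
# (real listings are capped and would be paged with the marker param)
for container in requests.get(OLD, headers=HDRS).text.splitlines():
    requests.put(f"{NEW}/{container}", headers=HDRS)   # recreate the container
    for obj in requests.get(f"{OLD}/{container}", headers=HDRS).text.splitlines():
        data = requests.get(f"{OLD}/{container}/{obj}", headers=HDRS).content
        requests.put(f"{NEW}/{container}/{obj}", headers=HDRS, data=data)
```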
*** edolnx has quit IRC | 22:06 | |
*** vipul_ has joined #openstack | 22:06 | |
Kiall | jasona: rackspace have a published "Reference Architecture".. might be of use for you.. http://www.referencearchitecture.org/hardware-specifications/ | 22:06 |
jasona | i had a look | 22:06 |
*** ChrisAM1 has joined #openstack | 22:06 | |
*** vipul_ has quit IRC | 22:06 | |
*** Xenith has quit IRC | 22:06 | |
jasona | it is sort of useful but kinda mostly if you want to buy dell. which is why i was hoping to get some more feedback :) | 22:06 |
jasona | i was looking for more generic feedback from people | 22:07 |
Kiall | really though, its hard to put any numbers on it without knowing the workload etc etc.. | 22:07 |
Glacee | notmyname: yeah thats what I thought.. I will put a crazy number to start with and see how it reacts :0 | 22:07 |
*** pasik has quit IRC | 22:07 | |
notmyname | jasona: I'm not sure who came up with that reference architecture list | 22:07 |
*** edolnx has joined #openstack | 22:07 | |
jasona | well, i would define the workload as mixed use, small to medium. i was figuring on 3-4 compute nodes, some object storage etc | 22:07 |
*** pasik has joined #openstack | 22:07 | |
*** Xenith has joined #openstack | 22:07 | |
jasona | i.e the point is to have a working openstack cluster that we can give researchers to actually do some work on | 22:08 |
jasona | rather than hand creating all the KVM machines they need | 22:08 |
jasona | we're going to give them about 200-500T of storage to do stuff with alongside that. (most of which is not swift) | 22:08 |
notmyname | Glacee: don't go too big | 22:08 |
*** lionel has quit IRC | 22:08 | |
Kiall | jasona: really, thats not the workload.. the workload is more along the lines of what are they using those VMs for.. is it I/O intensive? RAM intensive? CPU intensive etc etc... | 22:08 |
jasona | ahhh | 22:08 |
notmyname | Glacee: you should be able to come up with a reasonable number | 22:08 |
*** lionel has joined #openstack | 22:09 | |
Glacee | notmyname: thanks for the advice.. would 2^30 be reasonable or am I crazy? | 22:09 |
Kiall | At the end of the day, its that sort of information which will tell you what hardware to buy... | 22:09 |
notmyname | that's _huge_ | 22:09 |
jasona | it's genomics researchers ? :) yes they like more ram. cpu not so much. i/o intensive yes but only in moving large amounts of data | 22:09 |
Glacee | hahah yeah thats what I thought :) | 22:09 |
jasona | i.e if they need to move 100G files.. and a few T of data to solve basic problems. but they aren't generating large i/o loads other than as workflow | 22:09 |
notmyname | Glacee: ya, that allows you to have 10737418 storage volumes | 22:09 |
notmyname | that's almost 200K 60-drive servers | 22:10 |
Glacee | lol yeah | 22:10 |
notmyname | I somehow doubt you'll get a cluster that big | 22:10 |
jasona | unless glacee works for blizzard ? :) | 22:11 |
Kiall | jasona: you're not going to be able to articulate the workloads in an IRC chat ;) At the end of the day, you need to figure out what the users will be doing (in terms of CPU/RAM/disk I/O/network I/O etc) then size the hardware to handle that... | 22:11 |
Glacee | jasona: hell no.. I would be ashamed now.. their Pandaria release.. what a joke | 22:11 |
jasona | kiall: the users can't give me that now | 22:12 |
Kiall | then you cant size the hardware accurately :) | 22:12 |
jasona | kiall: and i have to take a stab at close enough, since this has to be ordered in 3 weeks.. | 22:12 |
jasona | or it sets off a chain reaction that kills a bunch of project stuff :) | 22:12 |
jasona | maybe i can't do it accurately. can i try for 'in the same city' even if i can't get 'in the same ballpark' ? | 22:13 |
Kiall | lol, then you need to get in front of them and annoy them until they tell you what they need ;) | 22:13 |
Glacee | is 2^22 more reasonable or still crazy in your opinion? | 22:13 |
notmyname | Glacee: partition power of 30 would allow you to have a cluster with nearly 5000PB of _billable_ storage (at 80% full) | 22:13 |
notmyname | assuming you use 2TB drives | 22:14 |
jasona | kiall: they honestly can't give me more and they have no incentive to do that anyway. | 22:14 |
Glacee | yeah.. I realised that it was a crazy number after posting to the channel :) | 22:14 |
jasona | the cluster being built is being built partly to get them to look at this stuff and use it | 22:14 |
*** cloudgeek has joined #openstack | 22:14 | |
jasona | i.e get them interested in openstack and using the paas in the future. rather than the jillion small clusters around the place | 22:14 |
notmyname | Glacee: 20 or 22 is quite reasonable for large clusters | 22:14 |
jasona | so with that in mind, taking a median approach, more feedback on hardware spec ? :) | 22:15 |
Kiall | jasona: then really, all you can do is guess.. start small. Since this is "get them to look at this stuff and use it", start with as little hardware as possible. Then you'll see what people really use. | 22:15 |
Glacee | yeah 41PB of billable storage with 2^22 | 22:16 |
notmyname | Glacee: 22 gives you 19PB of billable storage with 2TB drives | 22:16 |
Glacee | using 3TB drives | 22:16 |
notmyname | heh | 22:16 |
Glacee | hmm | 22:16 |
notmyname | 2**22 / 100 * 2000 * .91 * .8 / 3 / 2**20 | 22:16 |
notmyname | .91 is marketing to actual formatted size | 22:16 |
notmyname | .8 is 80% full | 22:17 |
notmyname | 3 for replica count | 22:17 |
notmyname | 2**20 to convert from GB to PB | 22:17 |
notmyname | .91 is ok for 2TB drives. it will be different for 3TB drives | 22:17 |
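The same arithmetic, wrapped up so the constants are visible (100 partitions per drive, 91% marketing-to-formatted ratio, 80% full, 3 replicas; the function name and defaults are mine, mirroring the chat):

```python
def billable_pb(part_power, drive_gb, parts_per_drive=100,
                format_ratio=0.91, fullness=0.8, replicas=3):
    # drives the ring can address at ~100 partitions each
    drives = 2 ** part_power / parts_per_drive
    # usable GB across those drives, divided by replicas, converted GB -> PB
    return drives * drive_gb * format_ratio * fullness / replicas / 2 ** 20

print(billable_pb(22, 2000))                   # ~19.4 PB on 2TB drives
print(billable_pb(22, 2794, format_ratio=1))   # ~29.8 PB (2794 GB is already formatted)
```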
Glacee | thanks.. I will keep that formula.. handy... | 22:17 |
Glacee | ok I will check with 3TB | 22:18 |
Kiall | jasona: eg start with closer to commodity hardware.. eg 1Gb ethernet not 10g, you'll quickly see if there really is a need for 10g, or if more CPU is needed, that way - you have the budget left to upgrade etc | 22:18 |
notmyname | 3TB marketing == 3000000000000 bytes unformatted. format and convert to base 2 measurements to get the proper ratio | 22:18 |
*** nati2 has quit IRC | 22:20 | |
Glacee | from a few websites it seems like .91 also for 3TB | 22:21 |
*** nati2 has joined #openstack | 22:21 | |
Glacee | Formatted capacity 2.72TB | 22:21 |
notmyname | 2**22 / 100 * 2794 * .8 / 3 / 2**20 for 3TB (2794 is the number of GB in a 3TB drive) | 22:21 |
*** rbp has left #openstack | 22:22 | |
notmyname | 2**22 / 100 * 2794 * .8 / 3 / 2**20 = 29.80263824462891PB billable | 22:22 |
notmyname | ah ok. 2.72 formatted | 22:22 |
notmyname | still. 29PB | 22:22 |
Glacee | $52M/year at $0.15/GB | 22:25 |
Glacee | not bad :0 | 22:25 |
notmyname | heh :-) | 22:25 |
JesperA | How big is the biggest implementation of Swift? | 22:26 |
JesperA | (biggest known) | 22:26 |
notmyname | JesperA: "billions of files, petabytes of data" (unfortunately, that's all rackspace let's me say) | 22:26 |
Kiall | I would imagine Rackspace, but I doubt they give specifics... | 22:27 |
notmyname | but our clusters are larger than all the other published numbers I've seen | 22:27 |
Glacee | if the cluster reaches that capacity.. that's what I call a Champagne problem :0 | 22:27 |
JesperA | =) | 22:27 |
notmyname | indeed | 22:27 |
Glacee | do you think that 2^22 may be too slow if we start with around 140 devices? | 22:28 |
notmyname | Glacee: the slowdowns associated with a larger ring size only come with updating the ring (which is done offline) | 22:29 |
notmyname | Glacee: the other worry is the extra filesystem overhead for all the directory entries | 22:29 |
Glacee | ok | 22:30 |
Glacee | thanks again for your help.. that was instructive.. heading out | 22:30 |
notmyname | have a good day | 22:31 |
Glacee | you too thanks | 22:31 |
*** Guest68173 has quit IRC | 22:32 | |
*** nacx has quit IRC | 22:53 | |
*** hugokuo has joined #openstack | 22:56 | |
coli | notmyname: are you working in the UK or US ? | 22:56 |
jasona | kiall: i'm definitely looking at commodity for kit. just trying to differentiate | 22:56 |
jasona | kiall: wondering how other people pick between dell vs hp vs ibm vs.. | 22:56 |
coli | kiall: hi, in my opinion your script needs a change to the ec2_dmz_host parameter in nova.conf , it should point to the local compute node where nova-api is running and not to the controller. if it points to the controller then the instance is unable to communicate with 169.254.169.254 (when the compute node is on a different host than the controller) | 22:57 |
coli | jasona: we've used supermicro for years and haven't had many problems. softlayer is using them as well in large numbers. | 22:58 |
coli | kiall: to be specific, the instance is able to communicate with 169.254.169.254 (that is nova-api) but nova-api is unable to retrieve metadata as the packets arrive with the source address of the compute node and not of the instance. | 22:59 |
tjoy | jasona: supermicro does make good gear | 23:01 |
jasona | no argument there. but if they don't supply through existing gov contracts etc.. probably can't use them | 23:02 |
notmyname | coli: us | 23:03 |
tjoy | coli: isn't 169.254.whatever a local address like 127.0.0.1 ? am i missing something important? | 23:04 |
coli | tjoy: 169.254.169.254 is being DNAT'ed to the value of ec2_dmz_host and port 8773 (default unless set by parameter) | 23:05 |
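To make that concrete: the metadata redirect is just a NAT rule keyed off ec2_dmz_host. A rough sketch of the rule nova's network code installs (chain names vary by release, and 10.0.0.5 is a stand-in for whichever host should answer metadata requests):

```sh
# nova.conf (assumed value): per coli's suggestion, point ec2_dmz_host at
# the local compute node's nova-api instead of the controller
#   --ec2_dmz_host=10.0.0.5

# metadata traffic to 169.254.169.254:80 is DNAT'ed to $ec2_dmz_host:8773
iptables -t nat -A nova-network-PREROUTING -d 169.254.169.254/32 \
    -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.5:8773
```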
*** MarkAtwood has quit IRC | 23:12 | |
*** woleium has quit IRC | 23:12 | |
*** camm has quit IRC | 23:12 | |
*** webx has quit IRC | 23:12 | |
*** nelson1234 has quit IRC | 23:12 | |
*** clayg has quit IRC | 23:12 | |
*** lucas has quit IRC | 23:12 | |
*** cloud0_ has quit IRC | 23:12 | |
*** guaqua has quit IRC | 23:12 | |
*** medberry has quit IRC | 23:12 | |
*** Spirilis has quit IRC | 23:12 | |
*** Eyk^off has quit IRC | 23:12 | |
*** zz_bonzay has quit IRC | 23:12 | |
*** Aurelgadjo has quit IRC | 23:12 | |
*** Aim has quit IRC | 23:12 | |
*** miclorb_ has quit IRC | 23:12 | |
*** al has quit IRC | 23:12 | |
*** PiotrSikora has quit IRC | 23:12 | |
*** Pommi has quit IRC | 23:12 | |
*** cdub has quit IRC | 23:12 | |
*** agy has quit IRC | 23:12 | |
*** vidd-away has quit IRC | 23:12 | |
*** Kiall has quit IRC | 23:12 | |
*** pquerna has quit IRC | 23:12 | |
*** aimka has quit IRC | 23:12 | |
*** anticw has quit IRC | 23:12 | |
*** iRTermite has quit IRC | 23:12 | |
*** opsnare has quit IRC | 23:12 | |
*** Vek has quit IRC | 23:12 | |
*** termie has quit IRC | 23:12 | |
*** cclien has quit IRC | 23:12 | |
*** martin has quit IRC | 23:12 | |
*** blahee has quit IRC | 23:12 | |
*** olafont_ has quit IRC | 23:12 | |
*** akscram has quit IRC | 23:12 | |
*** cw has quit IRC | 23:12 | |
*** krow has quit IRC | 23:12 | |
*** rustam has quit IRC | 23:12 | |
*** wariola has quit IRC | 23:12 | |
*** dubenstein has quit IRC | 23:12 | |
*** rods has quit IRC | 23:12 | |
*** JStoker has quit IRC | 23:12 | |
*** HugoKuo_ has quit IRC | 23:12 | |
*** ollie1 has quit IRC | 23:12 | |
*** sticky has quit IRC | 23:12 | |
*** shang has quit IRC | 23:12 | |
*** obino has quit IRC | 23:12 | |
*** nid0 has quit IRC | 23:12 | |
*** AntoniHP has quit IRC | 23:12 | |
*** andyandy_ has quit IRC | 23:12 | |
*** sloop has quit IRC | 23:12 | |
*** Lumiere has quit IRC | 23:12 | |
*** russellb has quit IRC | 23:12 | |
*** martines has quit IRC | 23:12 | |
*** j^2 has quit IRC | 23:12 | |
*** datajerk has quit IRC | 23:12 | |
*** agoddard has quit IRC | 23:12 | |
*** floehmann has quit IRC | 23:12 | |
*** cmagina has quit IRC | 23:12 | |
*** root_ has quit IRC | 23:12 | |
*** mencken has quit IRC | 23:12 | |
*** Daviey has quit IRC | 23:12 | |
*** dendro-afk has quit IRC | 23:12 | |
*** Hunner has quit IRC | 23:12 | |
*** royh has quit IRC | 23:12 | |
*** hugokuo has quit IRC | 23:12 | |
*** cloudgeek has quit IRC | 23:12 | |
*** alekibango has quit IRC | 23:12 | |
*** jsh has quit IRC | 23:12 | |
*** dgags has quit IRC | 23:12 | |
*** binbash_ has quit IRC | 23:12 | |
*** cburgess has quit IRC | 23:12 | |
*** n0ano has quit IRC | 23:12 | |
*** benner has quit IRC | 23:12 | |
*** keekz has quit IRC | 23:12 | |
*** kirkland has quit IRC | 23:12 | |
*** perlstein has quit IRC | 23:12 | |
*** rwmjones has quit IRC | 23:12 | |
*** jbarratt_ has quit IRC | 23:12 | |
*** uvirtbot has quit IRC | 23:12 | |
*** troytoman-away has quit IRC | 23:12 | |
*** Xenith has quit IRC | 23:12 | |
*** pasik has quit IRC | 23:12 | |
*** edolnx has quit IRC | 23:12 | |
*** ChrisAM1 has quit IRC | 23:12 | |
*** koolhead17 has quit IRC | 23:12 | |
*** tryggvil_ has quit IRC | 23:12 | |
*** map_nw has quit IRC | 23:12 | |
*** odyi has quit IRC | 23:12 | |
*** chmouel has quit IRC | 23:12 | |
*** arun has quit IRC | 23:12 | |
*** mu574n9 has quit IRC | 23:12 | |
*** kerouac has quit IRC | 23:12 | |
*** phschwartz has quit IRC | 23:12 | |
*** gondoi has quit IRC | 23:12 | |
*** WormMan has quit IRC | 23:12 | |
*** carlp has quit IRC | 23:12 | |
*** ahale has quit IRC | 23:12 | |
*** superbobry has quit IRC | 23:12 | |
*** vishy has quit IRC | 23:12 | |
*** nijaba has quit IRC | 23:12 | |
*** kodapa_ has quit IRC | 23:12 | |
*** no`x has quit IRC | 23:12 | |
*** hggdh has quit IRC | 23:12 | |
*** paltman has quit IRC | 23:12 | |
*** GheRivero has quit IRC | 23:12 | |
*** errr has quit IRC | 23:13 | |
*** morellon has quit IRC | 23:13 | |
*** fujin has quit IRC | 23:13 | |
*** laurensell has quit IRC | 23:13 | |
*** ryan_fox1985 has quit IRC | 23:13 | |
*** ogelbukh has quit IRC | 23:13 | |
*** mirrorbox has quit IRC | 23:13 | |
*** markwash has quit IRC | 23:13 | |
*** aurigus has quit IRC | 23:13 | |
*** kpepple has quit IRC | 23:13 | |
*** johnmark has quit IRC | 23:13 | |
*** ashp has quit IRC | 23:13 | |
*** lool has quit IRC | 23:13 | |
*** villep has quit IRC | 23:13 | |
*** DanF has quit IRC | 23:13 | |
*** dotplus has quit IRC | 23:13 | |
*** ivoks has quit IRC | 23:13 | |
*** redconnection has quit IRC | 23:13 | |
*** JesperA has quit IRC | 23:13 | |
*** zul has quit IRC | 23:13 | |
*** jeblair has quit IRC | 23:13 | |
*** doude has quit IRC | 23:13 | |
*** andyandy has quit IRC | 23:13 | |
*** DuncanT has quit IRC | 23:13 | |
*** snowboarder04 has quit IRC | 23:13 | |
*** tjikkun has quit IRC | 23:13 | |
*** blamar has quit IRC | 23:13 | |
*** comstud has quit IRC | 23:13 | |
*** nilsson has quit IRC | 23:13 | |
*** ke4qqq has quit IRC | 23:13 | |
*** dabo has quit IRC | 23:13 | |
*** kodapa has quit IRC | 23:13 | |
*** pfibiger has quit IRC | 23:13 | |
*** tjoy has quit IRC | 23:13 | |
*** hyakuhei has quit IRC | 23:13 | |
*** jasona has quit IRC | 23:13 | |
*** romans has quit IRC | 23:13 | |
*** clayg_ is now known as clayg | 23:13 | |
*** cloudgeek has joined #openstack | 23:13 | |
*** Xenith has joined #openstack | 23:13 | |
*** pasik has joined #openstack | 23:13 | |
*** edolnx has joined #openstack | 23:13 | |
*** ChrisAM1 has joined #openstack | 23:13 | |
*** miclorb_ has joined #openstack | 23:13 | |
*** koolhead17 has joined #openstack | 23:13 | |
*** krow has joined #openstack | 23:13 | |
*** rustam has joined #openstack | 23:13 | |
*** tryggvil_ has joined #openstack | 23:13 | |
*** wariola has joined #openstack | 23:13 | |
*** alekibango has joined #openstack | 23:13 | |
*** dubenstein has joined #openstack | 23:13 | |
*** rods has joined #openstack | 23:13 | |
*** JStoker has joined #openstack | 23:13 | |
*** HugoKuo_ has joined #openstack | 23:13 | |
*** ollie1 has joined #openstack | 23:13 | |
*** map_nw has joined #openstack | 23:13 | |
*** sticky has joined #openstack | 23:13 | |
*** shang has joined #openstack | 23:13 | |
*** odyi has joined #openstack | 23:13 | |
*** obino has joined #openstack | 23:13 | |
*** nid0 has joined #openstack | 23:13 | |
*** floehmann has joined #openstack | 23:13 | |
*** al has joined #openstack | 23:13 | |
*** jsh has joined #openstack | 23:13 | |
*** dgags has joined #openstack | 23:13 | |
*** PiotrSikora has joined #openstack | 23:13 | |
*** AntoniHP has joined #openstack | 23:13 | |
*** chmouel has joined #openstack | 23:13 | |
*** superbobry has joined #openstack | 23:13 | |
*** arun has joined #openstack | 23:13 | |
*** binbash_ has joined #openstack | 23:13 | |
*** Pommi has joined #openstack | 23:13 | |
*** hggdh has joined #openstack | 23:13 | |
*** cdub has joined #openstack | 23:13 | |
*** paltman has joined #openstack | 23:13 | |
*** agy has joined #openstack | 23:13 | |
*** cburgess has joined #openstack | 23:13 | |
*** mu574n9 has joined #openstack | 23:13 | |
*** andyandy_ has joined #openstack | 23:13 | |
*** sloop has joined #openstack | 23:13 | |
*** kerouac has joined #openstack | 23:13 | |
*** GheRivero has joined #openstack | 23:13 | |
*** Lumiere has joined #openstack | 23:13 | |
*** russellb has joined #openstack | 23:13 | |
*** n0ano has joined #openstack | 23:13 | |
*** benner has joined #openstack | 23:13 | |
*** vidd-away has joined #openstack | 23:13 | |
*** martines has joined #openstack | 23:13 | |
*** j^2 has joined #openstack | 23:13 | |
*** datajerk has joined #openstack | 23:13 | |
*** agoddard has joined #openstack | 23:13 | |
*** phschwartz has joined #openstack | 23:13 | |
*** gondoi has joined #openstack | 23:13 | |
*** pquerna has joined #openstack | 23:13 | |
*** cmagina has joined #openstack | 23:13 | |
*** keekz has joined #openstack | 23:13 | |
*** errr has joined #openstack | 23:13 | |
*** kirkland has joined #openstack | 23:13 | |
*** WormMan has joined #openstack | 23:13 | |
*** perlstein has joined #openstack | 23:13 | |
*** rwmjones has joined #openstack | 23:13 | |
*** carlp has joined #openstack | 23:13 | |
*** morellon has joined #openstack | 23:13 | |
*** ahale has joined #openstack | 23:13 | |
*** fujin has joined #openstack | 23:13 | |
*** Aurelgadjo has joined #openstack | 23:13 | |
*** aimka has joined #openstack | 23:13 | |
*** laurensell has joined #openstack | 23:13 | |
*** ryan_fox1985 has joined #openstack | 23:13 | |
*** vishy has joined #openstack | 23:13 | |
*** root_ has joined #openstack | 23:13 | |
*** mencken has joined #openstack | 23:13 | |
*** Aim has joined #openstack | 23:13 | |
*** jbarratt_ has joined #openstack | 23:13 | |
*** ogelbukh has joined #openstack | 23:13 | |
*** nijaba has joined #openstack | 23:13 | |
*** anticw has joined #openstack | 23:13 | |
*** iRTermite has joined #openstack | 23:13 | |
*** dotplus has joined #openstack | 23:13 | |
*** Daviey has joined #openstack | 23:13 | |
*** uvirtbot has joined #openstack | 23:13 | |
*** mirrorbox has joined #openstack | 23:13 | |
*** olafont_ has joined #openstack | 23:13 | |
*** akscram has joined #openstack | 23:13 | |
*** blahee has joined #openstack | 23:13 | |
*** martin has joined #openstack | 23:13 | |
*** cclien has joined #openstack | 23:13 | |
*** termie has joined #openstack | 23:13 | |
*** Vek has joined #openstack | 23:13 | |
*** opsnare has joined #openstack | 23:13 | |
*** cw has joined #openstack | 23:13 | |
*** royh has joined #openstack | 23:13 | |
*** Hunner has joined #openstack | 23:13 | |
*** dendro-afk has joined #openstack | 23:13 | |
*** markwash has joined #openstack | 23:13 | |
*** troytoman-away has joined #openstack | 23:13 | |
*** kodapa_ has joined #openstack | 23:13 | |
*** no`x has joined #openstack | 23:13 | |
*** aurigus has joined #openstack | 23:13 | |
*** kpepple has joined #openstack | 23:13 | |
*** ashp has joined #openstack | 23:13 | |
*** johnmark has joined #openstack | 23:13 | |
*** lool has joined #openstack | 23:13 | |
*** villep has joined #openstack | 23:13 | |
*** DanF has joined #openstack | 23:13 | |
*** ivoks has joined #openstack | 23:13 | |
*** zelazny.freenode.net sets mode: +v dendro-afk | 23:13 | |
*** pixelbeat has joined #openstack | 23:14 | |
*** redconnection has joined #openstack | 23:15 | |
*** JesperA has joined #openstack | 23:15 | |
*** zul has joined #openstack | 23:15 | |
*** jeblair has joined #openstack | 23:15 | |
*** doude has joined #openstack | 23:15 | |
*** andyandy has joined #openstack | 23:15 | |
*** DuncanT has joined #openstack | 23:15 | |
*** snowboarder04 has joined #openstack | 23:15 | |
*** tjikkun has joined #openstack | 23:15 | |
*** blamar has joined #openstack | 23:15 | |
*** comstud has joined #openstack | 23:15 | |
*** nilsson has joined #openstack | 23:15 | |
*** ke4qqq has joined #openstack | 23:15 | |
*** dabo has joined #openstack | 23:15 | |
*** kodapa has joined #openstack | 23:15 | |
*** pfibiger has joined #openstack | 23:15 | |
*** tjoy has joined #openstack | 23:15 | |
*** hyakuhei has joined #openstack | 23:15 | |
*** jasona has joined #openstack | 23:15 | |
*** romans has joined #openstack | 23:15 | |
*** clayg is now known as 15SAADNDC | 23:15 | |
*** Kiall_ has joined #openstack | 23:15 | |
*** MarkAtwood has joined #openstack | 23:15 | |
*** woleium has joined #openstack | 23:15 | |
*** camm has joined #openstack | 23:15 | |
*** webx has joined #openstack | 23:15 | |
*** nelson1234 has joined #openstack | 23:15 | |
*** clayg has joined #openstack | 23:15 | |
*** lucas has joined #openstack | 23:15 | |
*** cloud0_ has joined #openstack | 23:15 | |
*** guaqua has joined #openstack | 23:15 | |
*** medberry has joined #openstack | 23:15 | |
*** Spirilis has joined #openstack | 23:15 | |
*** Eyk^off has joined #openstack | 23:15 | |
*** zz_bonzay has joined #openstack | 23:15 | |
*** JesperA has quit IRC | 23:15 | |
*** JesperA has joined #openstack | 23:15 | |
*** medberry is now known as Guest913 | 23:16 | |
*** Kiall_ is now known as Guest79578 | 23:16 | |
*** Pommi has quit IRC | 23:16 | |
*** Guest79578 has quit IRC | 23:16 | |
*** Guest79578 has joined #openstack | 23:16 | |
*** Guest79578 is now known as Kiall | 23:16 | |
JesperA | Anyone know if it is possible to make HTTP requests to a Dell EqualLogic array? | 23:19 |
*** phschwartz has quit IRC | 23:22 | |
*** phschwartz has joined #openstack | 23:22 | |
*** krish has joined #openstack | 23:24 | |
krish | hey guys | 23:24 |
*** dailylinux has quit IRC | 23:24 | |
*** Pommi has joined #openstack | 23:28 | |
krish | hi, i'm trying to restart nova network | 23:30 |
krish | and it fails with an error | 23:30 |
krish | anyone interested in seeing a pastie of it ? :) | 23:30 |
coli | what does the error say ? | 23:31 |
*** krish has quit IRC | 23:34 | |
*** debo-os has quit IRC | 23:46 | |
*** zykes- has joined #openstack | 23:54 | |
*** pixelbeat has quit IRC | 23:55 | |
*** bengrue has quit IRC | 23:56 |