Wednesday, 2011-07-27

*** floehmann has quit IRC00:02
*** FallenPegasus has quit IRC00:04
*** msinhore has joined #openstack00:08
*** jheiss_ has joined #openstack00:11
*** jheiss has quit IRC00:14
*** jheiss_ is now known as jheiss00:14
*** mfischer has quit IRC00:15
*** rods has quit IRC00:15
*** ldlework has joined #openstack00:17
*** adjohn has quit IRC00:17
*** Jamey has quit IRC00:19
*** ton_katsu has joined #openstack00:22
*** mfischer has joined #openstack00:29
*** GeoDud has joined #openstack00:33
*** blueblade has joined #openstack00:37
*** stanchan has quit IRC00:41
*** rchavik has quit IRC00:43
*** jeffjapan has joined #openstack00:43
*** shentonfreude has joined #openstack00:47
*** rchavik has joined #openstack00:47
*** maplebed has quit IRC00:48
*** jsalisbury has joined #openstack00:53
*** jsalisbury has quit IRC00:55
*** miclorb_ has quit IRC00:57
*** miclorb__ has joined #openstack00:57
<notmyname> parkerro: yes, there should actually be one entry for each replica of an account. the processing step averages the values it gets  00:57
<ianweller> 23  00:58
<ianweller> oops.  00:58
* ianweller goes back into hiding  00:58
*** ljl1 has joined #openstack01:01
*** sandywalsh has quit IRC01:06
*** dragondm has quit IRC01:09
*** ejat has quit IRC01:12
*** jdurgin has quit IRC01:16
*** ccc11 has joined #openstack01:16
*** ccc111 has joined #openstack01:18
*** vladimir3p has quit IRC01:19
*** mfischer has quit IRC01:20
*** ccc11 has quit IRC01:20
*** PeteDaGuru has quit IRC01:24
*** worstadmin has joined #openstack01:24
*** vodanh86 has joined #openstack01:29
*** clauden_ has quit IRC01:30
<vodanh86> Hello, recently, I start up an Instance and seems problems with Instance host name.  01:31
<vodanh86> ubuntu@(none):~$  <--- Host name is none  01:31
*** worstadmin_ has joined #openstack01:32
*** worstadmin has quit IRC01:32
*** martine has joined #openstack01:33
<vodanh86> do you know why?  01:34
*** mwhooker has quit IRC01:37
*** osier has joined #openstack01:40
<parkerro> notmyname: which processing step? is the average in the log_processing_data?  01:48
*** shang has quit IRC01:51
*** shang has joined #openstack01:51
*** ton_katsu has quit IRC01:51
*** ton_katsu has joined #openstack01:54
*** bengrue has quit IRC02:00
*** msinhore has quit IRC02:03
*** FallenPegasus has joined #openstack02:05
<parkerro> notmyname: think we figured it out, new columns started to show up in summary data  02:12
*** deshantm_laptop has joined #openstack02:24
*** lorin1 has joined #openstack02:26
*** lorin1 has quit IRC02:38
*** cereal_bars has joined #openstack02:39
*** parkerro has quit IRC02:40
*** msinhore has joined #openstack02:45
*** deshantm_laptop_ has joined #openstack02:55
*** deshantm_laptop has quit IRC02:58
*** deshantm_laptop_ is now known as deshantm_laptop02:58
*** deshantm_laptop has quit IRC03:01
*** deshantm_laptop has joined #openstack03:02
*** shentonfreude has quit IRC03:17
*** shentonfreude has joined #openstack03:22
*** tryggvil has quit IRC03:25
*** msinhore has quit IRC03:34
*** AimanA is now known as HouseAway03:42
*** ike has joined #openstack03:43
*** HouseAway has quit IRC03:44
*** FallenPegasus has quit IRC03:50
*** FallenPegasus has joined #openstack03:51
*** ryker has quit IRC03:51
*** thingee has joined #openstack03:54
*** kashyap has joined #openstack04:01
*** relevant_taco has joined #openstack04:05
*** FallenPegasus has quit IRC04:09
*** thingee has quit IRC04:10
*** msinhore has joined #openstack04:16
*** HowardRoark has quit IRC04:28
*** dirakx has quit IRC04:30
*** matiu_ has joined #openstack04:39
*** matiu has quit IRC04:43
*** fysa has quit IRC04:44
*** fysa has joined #openstack04:47
*** nci has joined #openstack04:51
*** f4m8_ is now known as f4m804:51
*** hadrian has quit IRC04:53
*** matiu_ is now known as matiu04:55
*** vodanh86 has quit IRC05:04
*** martine has quit IRC05:19
*** ike has quit IRC05:29
*** matiu_ has joined #openstack05:30
*** matiu has quit IRC05:30
*** matiu_ is now known as matiu05:31
*** cereal_bars has quit IRC05:32
*** ldlework has quit IRC05:34
*** deshantm_laptop has quit IRC05:36
*** matiu_ has joined #openstack05:41
*** matiu has quit IRC05:45
*** mahendra has joined #openstack05:52
*** Capashen has joined #openstack05:56
*** guigui1 has joined #openstack05:57
*** arun_ has quit IRC06:05
*** arun_ has joined #openstack06:07
*** arun__ has joined #openstack06:09
*** nerens has quit IRC06:09
*** ccc111 has quit IRC06:11
*** kashyap has quit IRC06:11
*** stewart has joined #openstack06:14
*** arun__ has quit IRC06:15
*** ccc11 has joined #openstack06:16
*** ccc11 has quit IRC06:21
*** dirakx has joined #openstack06:26
*** Tribaal_ch has joined #openstack06:35
*** koolhead17|afk has left #openstack06:35
*** dirakx has quit IRC06:35
*** Tribaal_ch has quit IRC06:36
*** Tribaal_ch has joined #openstack06:36
*** miclorb__ has quit IRC06:44
*** miclorb__ has joined #openstack06:48
*** aryan has quit IRC06:48
*** ccc11 has joined #openstack06:56
*** ccc11 has quit IRC07:00
*** reidrac has joined #openstack07:01
*** katkee has joined #openstack07:02
*** rajeshb has joined #openstack07:04
*** rajeshb has quit IRC07:06
*** nerens has joined #openstack07:09
*** tomeff has joined #openstack07:09
*** mgoldmann has joined #openstack07:18
*** tudamp has joined #openstack07:20
*** vodanh86 has joined #openstack07:22
*** andy-hk has joined #openstack07:24
*** kashyap has joined #openstack07:30
*** ccc11 has joined #openstack07:33
*** nerens has quit IRC07:34
<uvirtbot`> New bug: #816817 in glance "UnboundLocalError: local variable 'resp_headers' referenced before assignment" [Undecided,New] https://launchpad.net/bugs/816817  07:36
*** openpercept_ has joined #openstack07:37
*** jedi4ever has joined #openstack07:38
*** dirakx has joined #openstack07:41
*** nagyz has joined #openstack07:44
<HugoKuo__> does anyone try to upgrade glance?  07:45
<HugoKuo__> and using exist image db?  07:45
*** miclorb__ has quit IRC07:59
*** worstadmin_ has quit IRC08:03
*** duker has joined #openstack08:08
*** nijaba has joined #openstack08:13
*** miclorb__ has joined #openstack08:19
*** tadhgred has joined #openstack08:20
*** vodanh86 has quit IRC08:21
*** alperkanat has joined #openstack08:26
<alperkanat> hey there.. can someone please help me about this question i've asked at docs.openstack? http://docs.openstack.org/cactus/openstack-object-storage/admin/content/understanding-how-object-storage-works.html#comment-265530993  08:27
*** ljl1 has quit IRC08:31
*** ljl1 has joined #openstack08:31
*** miclorb__ has quit IRC08:32
<alperkanat> i can't understand the difference between a device and a partition in terms of openstack since they're almost alike on unix systems..  08:32
<alperkanat> and i can't understand what a ring is composed of despite i've read nearly all the docs for swift  08:33
*** mahendra has quit IRC08:33
<alperkanat> it seems like it's composed of storage node devices and partitions but i can't be sure  08:33
<alperkanat> for instance i guess we should have at least 3 rings for account, container and objects?  08:34
<reidrac> a storage node contains (probably) one zone, with several devices, and partitions assigned to the devices  08:35
<nagyz> hm  08:35
<nagyz> anyone worked with selinux + openstack?  08:35
<alperkanat> but in my understanding, a zone can be anything due to the logic of your app  08:36
<reidrac> the zones are used to be sure that a partition is always replicated in a different zone  08:36
<alperkanat> and in terms of openstack device is not a unix device like /dev/sdXX  08:36
<reidrac> alperkanat: yes, that's true - it can be a datacenter, a whole rack, a single server, etc  08:36
<alperkanat> reidrac: yes; that's another part i didn't understand. how can a zone make sure that a partition is always replicated?  08:37
<reidrac> swift guarantees you that a partition will always be replicated into a different zone  08:37
<alperkanat> so that means it's replicated for how many zones you have..? then why do we still have to define the replication count?  08:38
<reidrac> no, you get as many replicas as you want  08:38
<alperkanat> you can have more than one replica on each zone?  08:39
<reidrac> the zone idea is used as "point of failure", so you can lose a zone without losing more than one replica  08:39
<reidrac> replica? do you mean replicated partitions? yes, as long as they are from a different zone  08:40
<reidrac> let's say you have 4 zones, 3 replicas, and n devices (that's not relevant for the example)  08:40
<reidrac> you put a file, and it goes into a partition (the files aren't split, btw), inside the ZONE 1  08:41
<reidrac> then you're guaranteed to have 2 copies of that file in zones different than zone 1  08:41
<alperkanat> what is i have 2 zones, 3 replicas?  08:42
<alperkanat> s/is/if/  08:42
<reidrac> obviously you can't  08:42
<reidrac> the number of zones must be greater or equal to the number of replicas  08:43
<reidrac> IMHO the most confusing concept in swift is "partition" because almost everybody assumes properties of a disk partition, and that's not the case  08:44
<alperkanat> yes that's another question in my mind  08:44
<reidrac> the documentation is probably a little bit dense, but I think it's understandable  08:45
<alperkanat> docs use different terms at different parts so that's a bit confusing  08:45
<alperkanat> indeed it's  08:45
<alperkanat> but you get lots of questions too :)  08:45
<alperkanat> i saw parts about moving partitions away but i knew that it means rsync'ing btw partitions  08:46
<alperkanat> what i didn't understand is partition power  08:46
<alperkanat> what for and how it's used  08:46
<reidrac> that's the number of partitions; you set it when you create the ring and it can't be changed  08:47
<reidrac> so you need to think the max number of partitions you want to have in your cluster  08:47
<alperkanat> afaik hashes are used to locate files without having to consult a db.. and therefore the partition power could be used i guess  08:47
<alperkanat> yes the docs seem to be for rackspace mentioning about 5000 disks (1 TB) with 100 partitions each :) unbelievable..  08:48
<alperkanat> so if i estimated a small number for partition power, would i have serious problems in the future while scaling?  08:49
<reidrac> actually it depends on the disks you're using (size, hw failure rate) and the number of zones and replicas  08:49
<reidrac> not really, that would mean that in the future you'll have bigger partitions  08:49
<reidrac> that may mean more data loss in case of hw failure  08:50
*** mnour has joined #openstack08:50
*** matiu_ has quit IRC08:51
<reidrac> let's say you have 5 zones with 24 disks each one, with 3 replicas; if you loose 3 disks in different zones and they old the 3 copies of a partition, bigger partitions means you lose more data  08:52
<reidrac> *hold the 3 copies  08:52
<alperkanat> so what happens if i have less partition power? afaik it determines the maximum number of partitions a swift cluster may have..  08:53
<reidrac> less partitions means bigger partitions, as I said  08:54
<alperkanat> shouldn't that be the opposite? if i had more partitions that i wouldn't lose data?  08:54
<alperkanat> oh i see  08:54
<reidrac> more partitions mean more work for replication, but less data loss in the event of a multiple failure  08:54
<alperkanat> and how does this approximation affect when we have the max number of machines in our cluster?  08:55
<reidrac> anyway, statistics are dependent on your hw (ie. is slightly different using 1TB disks than 3TB disks)  08:55
<reidrac> there's no max number of machines, you can add as many nodes as you want  08:56
<reidrac> but you can make a wild guess... do you expect to have more than 6 zones? :)  08:56
<alperkanat> i always thought zones as data centers for general purpose  08:57
*** jeffjapan has quit IRC08:57
<alperkanat> so i guess not :)  08:57
<alperkanat> i think i'll go with 2 zones for starters.. probably different racks  08:59
<alperkanat> but i want to make sure that we won't have scaling problems just because i made the wrong approximations.. so trying to understand how each of those parameters we talked about affect swift  09:00
<reidrac> you need to read the documentation, if you're going to use the recommended number of replicas (3) you can't have 2 zones  09:01
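[editor's note] The sizing trade-off reidrac describes can be put in numbers. A common rule of thumb (echoed in the deployment docs, and by alperkanat's "100 partitions per disk" reading above) is to pick the partition power so that a fully built-out cluster still has on the order of 100 partitions per disk. The helper below is an illustrative sketch of that arithmetic, not an official formula:

```python
import math

def suggest_part_power(max_disks, partitions_per_disk=100):
    """Rule-of-thumb sizing sketch: pick the partition power so that a
    fully built-out cluster still has ~100 partitions per disk.
    The power is fixed at ring creation and cannot be changed later."""
    return int(math.ceil(math.log(max_disks * partitions_per_disk, 2)))

# A cluster that might one day reach 5000 disks:
power = suggest_part_power(5000)
num_partitions = 2 ** power  # total partitions in the ring, forever
print(power, num_partitions)  # 19 524288
```

Undershooting the power simply means fewer, bigger partitions later on, which is the "more data lost per multi-disk failure" effect reidrac mentions.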
*** nerens has joined #openstack09:04
*** winston-d has quit IRC09:05
*** duker has quit IRC09:06
*** tomeff1 has joined #openstack09:08
*** tomeff1 has quit IRC09:11
*** alekibango has joined #openstack09:11
*** duker has joined #openstack09:11
*** tomeff has quit IRC09:12
*** tomeff1 has joined #openstack09:12
*** darraghb has joined #openstack09:13
*** rzulf has joined #openstack09:13
*** tryggvil has joined #openstack09:14
*** rzulf has left #openstack09:16
*** Faddy has joined #openstack09:16
<alperkanat> is there an easy way of creating 100 partitions per drive and add it to swift? since building the ring you have to "carefully" add each of them with the correct ip, zone and port information..  09:24
<reidrac> the partitions are referred to the ring, it's not related to disk partitions  09:26
<alperkanat> ?  09:26
<reidrac> I recommend you to follow the instructions to setting up a SAIO cluster and go through the documentation to help you understand how it works  09:27
<alperkanat> i did create a small one expanded to 6 vms.. however i didn't understand how disk partitions are not that related to the ring because we obviously specify them with a command like swift-ring-builder <builder_file> add z<zone>-<ip>:<port>/<device_name>_<meta> <weight>  09:29
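[editor's note] The "carefully add each device" chore alperkanat worries about is usually handled by scripting the command generation. A minimal sketch, following the `swift-ring-builder ... add z<zone>-<ip>:<port>/<device> <weight>` template quoted above; the IPs, zones, and device names are invented for illustration:

```python
# Sketch: emit swift-ring-builder commands for a list of storage devices.
# Hosts, zones, and device names below are hypothetical.
devices = [
    # (zone, ip, port, device, weight)
    (1, "10.0.0.1", 6000, "sdb1", 100),
    (1, "10.0.0.1", 6000, "sdc1", 100),
    (2, "10.0.0.2", 6000, "sdb1", 100),
]

def ring_builder_cmds(builder_file, devices):
    """Build the add commands for every device, then a final rebalance."""
    cmds = []
    for zone, ip, port, dev, weight in devices:
        cmds.append(
            "swift-ring-builder %s add z%d-%s:%d/%s %d"
            % (builder_file, zone, ip, port, dev, weight)
        )
    cmds.append("swift-ring-builder %s rebalance" % builder_file)
    return cmds

for cmd in ring_builder_cmds("object.builder", devices):
    print(cmd)
```

Note the partitions themselves are never added by hand: the rebalance step distributes all 2^partition_power partitions across whatever devices were added.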
*** irahgel has joined #openstack09:31
<uvirtbot`> New bug: #816866 in swift "Domain_remap Web-Index" [Undecided,New] https://launchpad.net/bugs/816866  09:31
*** andy-hk has quit IRC09:32
*** tryggvil has quit IRC09:32
*** dobber has joined #openstack09:35
*** willaerk has joined #openstack09:48
*** tryggvil has joined #openstack09:51
*** cattarhine is now known as dayeyes`09:54
*** dayeyes` is now known as catarrhine09:54
*** catarrhine has quit IRC09:58
*** catarrhine has joined #openstack09:58
<tadhgred> glance question: I have glance configured to use Swift. I notice that the glance details command spits out the Swift password to stdout. Is this intentional?  10:00
<tadhgred> swift://glance:glance:NOTTHEPASSWORD@https://192.168.10.10/auth/v1.  10:01
*** dobber has quit IRC10:03
*** dobber has joined #openstack10:04
*** ton_katsu has quit IRC10:15
*** anm3rt has joined #openstack10:18
*** thickskin has quit IRC10:19
*** thickskin has joined #openstack10:19
<anm3rt> Hi! is it correct to ask here a question concerned installation?  10:21
<anm3rt> is anybody here? :)  10:28
*** cixie has joined #openstack10:28
*** irahgel has left #openstack10:33
*** ljl1 has quit IRC10:35
<TREllis> anm3rt: just ask the question :)  10:47
*** ccc11 has quit IRC10:49
*** dirakx has quit IRC10:58
<anm3rt> TREllis, I try to configure swift and auth (cactus release) - do all according manuals but when try to make swauth_prep - it hanges and seemingly do nothing. Netstat reveals two new connections, no log  10:59
<anm3rt> just don`t know where to dig  11:03
*** rods has joined #openstack11:11
<TREllis> anm3rt: paste the swauth_prep command you are running  11:12
*** markvoelker has joined #openstack11:12
<anm3rt> TREllis, swauth-prep -K passwd_with_dash_and_numbers -A http://127.0.0.1:8080/auth/  11:14
<TREllis> anm3rt: try https  11:15
*** nerens has quit IRC11:15
<anm3rt> oh my...  11:15
*** dirakx1 has joined #openstack11:15
<TREllis> anm3rt: working?  11:15
<anm3rt> tnanks!  11:15
<TREllis> np  11:15
*** smaresca has quit IRC11:16
<anm3rt> almost :) will check conf now  11:16
*** daedalusflew has quit IRC11:16
*** daedalusflew has joined #openstack11:16
*** smaresca has joined #openstack11:16
*** nerens has joined #openstack11:19
*** anm3rt has left #openstack11:31
*** nerens has quit IRC11:49
*** mfer has joined #openstack11:51
*** mdaubs has joined #openstack12:02
*** mdaubs has quit IRC12:03
*** mdaubs has joined #openstack12:03
*** sandywalsh has joined #openstack12:04
*** martine has joined #openstack12:05
<tadhgred> glance question: I have glance configured to use Swift. I notice that the glance details command spits out the Swift password to stdout. Is this intentional?  12:09
<tadhgred> swift://glance:glance:NOTTHEPASSWORD@https://192.168.10.10/auth/v1  12:10
*** dirakx1 has quit IRC12:16
*** hadrian has joined #openstack12:22
*** bsza has joined #openstack12:22
*** bsza has quit IRC12:24
*** bsza has joined #openstack12:24
*** Flint has quit IRC12:25
*** ctennis has quit IRC12:28
*** uksysadmin has joined #openstack12:30
*** mnour has quit IRC12:31
*** mnour1 has joined #openstack12:31
*** dirakx has joined #openstack12:33
*** mfer has quit IRC12:34
*** mfer has joined #openstack12:36
*** msinhore has joined #openstack12:38
*** huslage has joined #openstack12:39
*** ctennis has joined #openstack12:42
*** dirakx has quit IRC12:42
*** chomping has quit IRC12:43
*** msinhore has quit IRC12:45
*** lorin1 has joined #openstack12:47
<alperkanat> can someone please tell me what a partition means for openstack? i guess it doesn't correspond to a real disk partition  12:47
*** msivanes has joined #openstack12:47
*** daedalusflew has quit IRC12:48
*** smaresca has quit IRC12:48
*** caribou has joined #openstack12:49
*** msinhore has joined #openstack12:49
*** aliguori has joined #openstack12:53
*** duker has quit IRC12:55
*** duker has joined #openstack12:55
*** marrusl has joined #openstack12:56
*** guigui1 has quit IRC12:58
<notmyname> alperkanat: for swift?  12:58
*** ryker has joined #openstack12:59
<alperkanat> notmyname: yes  12:59
<notmyname> alperkanat: a partition is a division of the logical keyspace. objects are mapped to a partition. partitions are assigned to storage volumes (all handled with the ring)  13:00
*** dirakx has joined #openstack13:00
<alperkanat> notmyname: i shouldn't think partitions as chunks of objects, right? the data is just partitioned into smaller sets without really being split  13:02
<notmyname> alperkanat: correct. objects are atomic entities within swift (but it's possible to tie together multiple objects into one larger object with a manifest object)  13:03
<alperkanat> hmm so we have another type called manifest object like a container?  13:04
<alperkanat> notmyname: can objects be pushed into different partitions despite they're in the same container? (so a container keeps track of partitions?)  13:05
*** marrusl has quit IRC13:05
<notmyname> alperkanat: accounts, containers, and objects (ie every entity in swift) are all mapped to the appropriate storage node in the same way (find partition, then find storage volume)  13:07
*** ccc11 has joined #openstack13:07
<notmyname> so yes, objects in a single container are on different partitions  13:07
<notmyname> a container is a "higher" level of abstraction than a partition  13:08
<notmyname> swift exposes account/container/object to the user and internally uses the partitions to know where to find an entity  13:08
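[editor's note] The "find partition" step notmyname describes is a hash of the entity's path with only the top partition-power bits kept. The sketch below illustrates the idea (swift's real implementation also mixes in a per-cluster hash suffix, omitted here; the partition power value is an example):

```python
import hashlib
import struct

PART_POWER = 18  # example value; fixed when the ring is created

def get_partition(account, container=None, obj=None, part_power=PART_POWER):
    """Map an entity path to a partition: hash the path and keep the
    top `part_power` bits of the digest.  Works identically for
    accounts, containers, and objects -- only the path length differs."""
    path = "/" + "/".join(p for p in (account, container, obj) if p)
    digest = hashlib.md5(path.encode("utf-8")).digest()
    top32 = struct.unpack(">I", digest[:4])[0]  # top 4 bytes as an int
    return top32 >> (32 - part_power)

part = get_partition("AUTH_demo", "photos", "cat.jpg")
assert 0 <= part < 2 ** PART_POWER
```

This is why no database lookup is needed to locate an entity: the partition falls out of the name itself, deterministically.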
<alperkanat> notmyname: thanks for the clarifications -- i've seen your nick at launchpad; i believe you're a dev  13:08
<alperkanat> keep up the good work! :)  13:09
*** mdaubs has quit IRC13:09
<notmyname> alperkanat: for a high-level overview of manifest objects, see http://programmerthoughts.com/openstack/the-story-of-an-openstack-feature/  13:09
<alperkanat> notmyname: i haven't seen any reference to manifest objects in cactus docs.. is it something new in the upcoming release?  13:10
<notmyname> no, it's in cactus  13:11
*** ameade has joined #openstack13:11
<alperkanat> notmyname: hmm so it's used for large objects? because that's one of the sections i haven't read yet :)  13:11
*** Shentonfreude has joined #openstack13:12
<alperkanat> i mostly read the deployment details and the logic behind its architecture  13:12
*** mdaubert has joined #openstack13:12
<notmyname> it's the only way in swift to store objects larger than the single-object cap (5GB)  13:12
*** Faddy has left #openstack13:13
<alperkanat> yes i'm surprised to see such a difference.. will read more about it but does that mean we will see more types (as in rdbms) as we go?  13:13
<notmyname> not sure I follow...  13:14
<alperkanat> allright  13:15
<alperkanat> thanks!  13:15
*** dirakx has quit IRC13:16
<alperkanat> are there any monitoring tools for swift btw? or is it left to sysadm?  13:16
<notmyname> alperkanat: it's left to sysadm. we can give guidance on some things to monitor, but there is no built-in monitoring  13:18
*** mfer has quit IRC13:19
<alperkanat> got another question.. afaik we should have at least 3 rings for accounts, objects and containers.. can we have more? if so, what would they be used for?  13:19
*** whitt has quit IRC13:19
*** primeministerp has quit IRC13:19
<alperkanat> notmyname: i just saw in launchpad tickets that proxy returns 507 headers when a disk is full.. that's why i asked that question..  13:20
*** mfer has joined #openstack13:20
<notmyname> alperkanat: no. one ring for accounts, one for containers, one for objects  13:20
*** mnour1 has quit IRC13:20
*** primeministerp has joined #openstack13:20
*** mnour has joined #openstack13:20
<alperkanat> notmyname: hmm the docs gave me the impression that we could have more rings like custom rings -- so that's why i didn't understand what a ring really means to swift  13:21
*** lts has joined #openstack13:22
*** llang629 has joined #openstack13:22
*** llang629 has left #openstack13:22
<notmyname> the ring is the mapping of partition to storage volumes.  13:22
<notmyname> for example (pseudo code only): list_of_storage_nodes = ring.lookup(account, container, object)  13:23
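[editor's note] notmyname's pseudocode can be fleshed out with a toy ring. Conceptually the ring is a table mapping (replica, partition) to a device; the device list and assignment table below are invented to show the shape of the lookup, not swift's real data:

```python
# Toy version of ring.lookup(): a ring holds one partition->device row
# per replica.  Devices and zone layout here are hypothetical.
class ToyRing:
    def __init__(self, replica2part2dev, devices):
        self.replica2part2dev = replica2part2dev  # one list per replica
        self.devices = devices

    def lookup(self, partition):
        """Return the storage devices holding each replica of a partition."""
        return [self.devices[row[partition]] for row in self.replica2part2dev]

devices = [
    {"id": 0, "zone": 1, "ip": "10.0.0.1", "device": "sdb1"},
    {"id": 1, "zone": 2, "ip": "10.0.0.2", "device": "sdb1"},
    {"id": 2, "zone": 3, "ip": "10.0.0.3", "device": "sdb1"},
]
# 4 partitions, 3 replicas; each row assigns every partition one device,
# arranged so that a partition's replicas never share a zone.
ring = ToyRing(
    replica2part2dev=[[0, 1, 2, 0], [1, 2, 0, 1], [2, 0, 1, 2]],
    devices=devices,
)

nodes = ring.lookup(1)
assert len({n["zone"] for n in nodes}) == 3  # replicas land in distinct zones
```

Hashing picks the partition (a pure function of the name); the ring then answers "which devices hold that partition" for reads, writes, and replication alike.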
*** mnour has quit IRC13:23
*** mnour has joined #openstack13:24
*** mfer has quit IRC13:24
*** bcwaldon has joined #openstack13:24
*** mfer has joined #openstack13:24
*** BuZZ-T has quit IRC13:25
*** BuZZ-T has joined #openstack13:27
*** gaitan has joined #openstack13:30
*** dirakx has joined #openstack13:30
*** ctennis has quit IRC13:34
*** mnour has quit IRC13:35
*** mnour has joined #openstack13:35
<alperkanat> i read that raid is not suggested with swift.. meaning software or hardware raid?  13:36
*** mnour has quit IRC13:36
*** mnour1 has joined #openstack13:37
<alperkanat> i believe it's software raid that's not recommended  13:37
*** guigui has joined #openstack13:37
<notmyname> alperkanat: either  13:41
<alperkanat> notmyname: what's wrong with hardware raid?  13:42
*** ctennis has joined #openstack13:42
<notmyname> RAID 5/6 has worst-case performance with small, random reads/writes. swift does almost exclusively small, random reads/writes. also, RAID rebuild times for large volumes are unacceptable  13:43
*** amccabe has joined #openstack13:43
*** kashyap has quit IRC13:43
*** huslage has quit IRC13:44
*** dirakx has quit IRC13:45
<alperkanat> allright thanks  13:45
*** f4m8 is now known as f4m8_13:45
*** primeministerp has quit IRC13:50
*** tryggvil__ has joined #openstack13:50
*** worstadmin_ has joined #openstack13:51
*** tryggvil has quit IRC13:53
*** tryggvil__ is now known as tryggvil13:53
*** amccabe has left #openstack13:54
*** osier has quit IRC13:54
*** whitt has joined #openstack13:55
*** marrusl has joined #openstack13:55
*** marrusl_ has joined #openstack13:57
*** guigui has quit IRC13:58
*** dirakx1 has joined #openstack13:59
*** marrusl has quit IRC14:01
*** markvoelker has quit IRC14:02
*** lborda has joined #openstack14:04
*** marrusl_ has quit IRC14:07
*** Ephur has joined #openstack14:07
*** guigui has joined #openstack14:08
*** marrusl_ has joined #openstack14:09
*** ldlework has joined #openstack14:09
*** primeministerp has joined #openstack14:15
*** jkoelker has joined #openstack14:19
*** huslage has joined #openstack14:20
*** marrusl__ has joined #openstack14:23
*** alandman has joined #openstack14:24
*** marrusl_ has quit IRC14:25
*** rchavik has quit IRC14:26
*** kbringard has joined #openstack14:26
*** tomeff has joined #openstack14:32
*** dirakx1 has quit IRC14:33
*** marrusl_ has joined #openstack14:35
*** tomeff1 has quit IRC14:37
*** marrusl__ has quit IRC14:39
*** marrusl_ has quit IRC14:40
*** vladimir3p has joined #openstack14:41
*** katkee has quit IRC14:45
*** dragondm has joined #openstack14:46
*** lborda has quit IRC14:52
*** yLothar has joined #openstack14:53
*** FallenPegasus has joined #openstack14:53
*** mdaubert has quit IRC14:54
*** dirakx1 has joined #openstack14:57
*** lborda has joined #openstack14:57
*** openpercept_ has quit IRC14:58
*** guigui has quit IRC14:58
*** reed has quit IRC14:59
*** mgoldmann has quit IRC14:59
<huslage> hardware raid is boring  15:00
<Glacee> huslage: indeed  15:00
<huslage> and a waste of money in openstack's model  15:00
<huslage> and probably slows things down  15:00
*** Capashen has quit IRC15:00
*** ddutta has quit IRC15:01
<huslage> who at dell wants a pilot customer for their cloud-in-a-box thing?  15:01
<Glacee> hehe you're interested in that?  15:01
<kbringard> if you have the hardware you can get the code for free  15:01
<kbringard> it's on github  15:01
<huslage> i have a budget, want to buy hardware.  15:02
<kbringard> or are you looking for them to send you some demo hardware?  15:02
*** worstadmin_ has quit IRC15:02
<huslage> no. i want them to send me hardware that i pay for  15:02
<huslage> i don't want free stuff  15:02
<huslage> i mean, i do, but not in this instance  15:02
<kbringard> haha, fair enough  15:02
*** HowardRoark has joined #openstack15:04
*** dendrobates is now known as dendro-afk15:05
<uvirtbot`> New bug: #817032 in glance "glance-control exits with 0 when glance-<server> script is not found" [Undecided,New] https://launchpad.net/bugs/817032  15:06
<huslage> kbringard: i'm building a cloud for a university in canada. we've got a grant for infrastructure. i've prototyped it on existing hardware and now it's time to get the real thing done.  15:06
<creiht> alperkanat: We should probably refine the docs to say that parity raid is not recommended  15:06
<kbringard> makes sense... I'm sure Dell's thing works well since it'll push their hardware for them  15:07
<creiht> you might for example want to use something like raid 10 at the container/account level  15:07
<huslage> creiht: to what end?  15:08
<kbringard> huslage: I mostly use HP enclosures and they work quite well  15:08
<huslage> kbringard: blades?  15:08
<kbringard> yessir  15:08
<creiht> huslage: for example, we have been testing using SSDs in a raid 10 for just account/container DBs to boost performance at that layer  15:08
<huslage> i've never bought into blades for some reason  15:08
<huslage> creiht: seems like overkill, but would be ok i guess.  15:08
<creiht> raid 10 gives you a little more speed, plus extra redundancy due to having a fewer number of machines storing the container/account dbs  15:09
<kbringard> they seem to work well for this... hardwired backplane switch for eth0 and eth1 makes inter-node communication super fast without the need for "extra" switching gear  15:09
*** dspano has joined #openstack15:09
<creiht> the container/account db operations are extremely disk IO bound  15:09
<huslage> true kbringard  15:09
*** resker has joined #openstack15:09
<kbringard> but, I've also never tried anything else... that's what I was given so that's what I used, hehe  15:09
<huslage> it'd be nice to see diagrams of how people do things.  15:10
<kbringard> I'm not allowed to share mine, but I can describe it to you  15:10
<huslage> sure  15:10
<kbringard> the architecture isn't what's "protected", but the diagrams have a lot of proprietary info in them  15:10
<huslage> i think i have a good idea of how to do stuff, but it's always good to hear how people do stuff  15:11
<creiht> huslage: by doing this, performance stays good even with very large containers (we tested with 1 billion objects in one container :))  15:11
<huslage> cool  15:11
<huslage> creiht: where do you work?  15:11
<creiht> huslage: rackspace :)  15:12
<huslage> oh that's right  15:12
<huslage> you told me that already  15:12
<creiht> hehe  15:12
* huslage slow on the uptake  15:12
<alperkanat> creiht: that explains the ssd's you're using on the servers :) i haven't heard much startups having ssd's in their servers :P  15:13
<huslage> i dig SSDs  15:13
*** ianloic has quit IRC15:14
<creiht> alperkanat: heh... we are still in testing... SSDs are proving to be a bit temperamental  15:14
*** ianloic has joined #openstack15:14
<creiht> and we wouldn't use SSDs for actual object storage  15:14
*** rajeshb has joined #openstack15:14
<huslage> you must respect the SSD :)  15:14
<creiht> either way, even if you don't use SSDs our testing is indicating that it is very worthwhile running account/container servers separate from object servers  15:14
*** dolphm has joined #openstack15:15
<alperkanat> i can't believe that account/container servers are more I/O intensive than object servers btw.. in the end the objects are the things you're downloading?  15:15
<alperkanat> creiht: most of the docs says rackspace uses all servers on nodes :) i guess that's old story  15:16
*** resker has quit IRC15:16
<creiht> alperkanat: that is true currently  15:16
<creiht> we are still testing  15:16
<creiht> but experience so far is indicating that it is better to pull those out  15:16
<alperkanat> i see  15:16
<creiht> I should update the docs :)  15:16
*** ryker1 has joined #openstack15:16
*** jjm has joined #openstack15:17
*** relevant_taco has quit IRC15:17
<alperkanat> creiht: you're taking them out to other zones or just other servers?  15:17
<notmyname> creiht: sounds like a swift team problem ;-)  15:17
*** ryker has quit IRC15:17
<creiht> alperkanat: the reason they are so I/O intensive is due to them being small sqlite db's and lots of operations require a hit to the db  15:17
*** adjohn has joined #openstack15:17
<creiht> alperkanat: to other servers  15:17
<creiht> notmyname: lol  15:17
<creiht> notmyname: there will always be a little part of swift inside me  15:17
<creiht> :)  15:17
<huslage> creiht: why are they sqlite dbs?  15:17
<huslage> that seems inefficient  15:18
<alperkanat> huslage: i think not  15:18
<huslage> just curious  15:18
<huslage> still haven't done much on the swift side.  15:18
<creiht> huslage: well containers/accounts are just listings  15:18
<huslage> yeah, true.  15:18
<huslage> key-value paris  15:18
<huslage> pairs too  15:18
<alperkanat> huslage: https://tlohg.wordpress.com/2011/03/31/the-story-of-swift/  15:18
<creiht> which require queries that are good for a db  15:19
<creiht> yeah that is a good post from another co-worker  15:19
<creiht> sqlite dbs are nice since they are just files  15:19
<creiht> we worked out a way to replicate them pretty well  15:19
<creiht> and allows us to basically distribute them across the entire cluster  15:19
<huslage> makes sense  15:20
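[editor's note] creiht's point that "containers/accounts are just listings" is easy to see with plain SQLite: a container DB is a small file holding rows of objects, and a listing request is an ordinary query. The schema below is a simplified illustration, not swift's actual container schema:

```python
import sqlite3

# Sketch of why a container listing maps naturally onto a small SQLite file:
# one table of object rows, queried with ordinary SQL.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE object (
        name TEXT PRIMARY KEY,
        size INTEGER,
        deleted INTEGER DEFAULT 0
    )
""")
db.executemany(
    "INSERT INTO object (name, size) VALUES (?, ?)",
    [("a/1.jpg", 1024), ("a/2.jpg", 2048), ("b/1.jpg", 512)],
)

# A container GET with a prefix filter is just a query:
rows = db.execute(
    "SELECT name, size FROM object"
    " WHERE deleted = 0 AND name LIKE 'a/%' ORDER BY name"
).fetchall()
print(rows)  # [('a/1.jpg', 1024), ('a/2.jpg', 2048)]
```

Because each DB is a single file, it can be rsynced and spread across the cluster like any other on-disk artifact, which is the replication property creiht describes.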
<alperkanat> creiht: is segmenting large files into chunks left to the programmer or the client side libraries do it for us? i haven't looked at the libraries yet  15:20
<huslage> they are just objects in the system then?  15:20
<creiht> alperkanat: currently I don't think the client side libs auto chunk  15:20
<notmyname> alperkanat: it's left up to client libraries  15:20
<alperkanat> creiht: it's funny though to copy *.gz files around :) rsyncing them automatically would be nice  15:21
<creiht> they are able to upload the manifest, and then you can upload the chunks  15:21
<creiht> huslage: kinda, they are a special case of objects  15:21
<creiht> well that is probably a bad way to describe them  15:21
<creiht> there are a lot of similarities  15:21
<creiht> alperkanat: are you talking about the rings?  15:21
<alperkanat> creiht: this means i have to upload a user's file to a temporary place, split that into chunks and start an upload process which will eventually result as a latency on user side to get their hands on the uploaded file..  15:22
<alperkanat> creiht: yes  15:22
<creiht> alperkanat: yeah we have talked about how we could make that more automatic, but everyone has different ideas how that should be done :)  15:22
<creiht> alperkanat: our ops guys generate the rings on one machine, then have a script that pushes the ring out to all the nodes  15:23
<notmyname> alperkanat: why not stream it from the user and after X bytes start a new stream?  15:23
<alperkanat> creiht: talking about 2 subjects at a time :) for which one are you talking about? splitting into chunks or sending *.gz's around? :)  15:23
<creiht> alperkanat: ring files  15:23
*** stewart has quit IRC15:24
<creiht> alperkanat: the swift command line tool will auto chunk a file  15:24
<creiht> to see an example of how it can be done  15:24
*** dgags has joined #openstack15:24
alperkanatcreiht: as far as i see you guys suggest to maintain rings on a single machine and distribute it to others.. so i'd make a list of all machines (even the new ones) and rsync the *.gz files - sounds like an easy bash script :P15:24
creihtalperkanat: yeah that is pretty much it15:25
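The ops workflow creiht describes — build the rings on one machine, then push the *.gz ring files out to every node — can be sketched like this. The host list and target directory are hypothetical; the helper only builds the rsync command lines so they can be reviewed before anything is actually run.

```python
import subprocess

RING_FILES = ["account.ring.gz", "container.ring.gz", "object.ring.gz"]

def ring_push_commands(hosts, ring_dir="/etc/swift"):
    """Build one rsync command per storage node."""
    commands = []
    for host in hosts:
        cmd = ["rsync", "-avz"] + RING_FILES + ["%s:%s/" % (host, ring_dir)]
        commands.append(cmd)
    return commands

def push_rings(hosts):
    """Run the generated commands; requires ssh access to each node."""
    for cmd in ring_push_commands(hosts):
        subprocess.check_call(cmd)
```

As alperkanat notes, the same thing is easily done as a short bash loop over a host list; the only real requirement is that every node ends up with identical ring files.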
alperkanatnotmyname: not sure about the details15:25
*** mnour has joined #openstack15:25
*** mnour1 has quit IRC15:25
notmynamealperkanat: check out the swift command line tool (like creiht said) for an example of how it could be done client-side15:26
*** mfischer has joined #openstack15:26
alperkanatcreiht: i've read that part of the docs but i'd like to use a client side library (python/php) in order to do it..15:26
creihtalperkanat: yeah the swift command line tool is written in python :)15:26
alperkanat:) ops15:26
alperkanatcreiht: i sometimes forget that it's open source15:26
creihthehe15:26
creihtalperkanat: adding auto chunking to the client libraries is on a list, but that list keeps growing :)15:27
*** reidrac has left #openstack15:27
alperkanati thought that cactus is 1.2.x btw but when you download from the link (latest) it's 1.3.x -- the development (diablo) release seems to be 1.4.x?15:27
alperkanatcreiht: you guys hiring? :p come on you're rackspace!15:28
creihtalperkanat: always :)15:28
mfischerrackertalent.com15:28
mfischer<-- racker15:29
alperkanati hope you guys would migrate to github completely one day :) how sync'd are they (launchpad vs github) btw?15:29
ttxnotmyname: swift 1.4.2 almost fully released now, pending package copy to release PPA15:29
*** rook][ has joined #openstack15:29
notmynamettx: thanks15:29
alperkanatcreiht: i'm an arch linux guy btw.. do you think there are some ubuntu specific parts in openstack that's hard to maintain for packagers?15:30
ttx(in case you want to pimp it in your OSCON talk :)15:30
alperkanati really dislike ubuntu and want to use arch wherever possible15:30
creihtalperkanat: for swift, shouldn't be15:30
creihtI can't speak for the other projects15:30
ttxalperkanat: I don't think there is anything ubuntu-specific anywhere15:30
alperkanathmm ok. for starters i could try swift15:30
mfischerwhat might be "ubuntu specific" ?15:31
alperkanatmfischer: init scripts etc..15:31
creihtyeah, people have been making RPMs for all of openstack15:31
creihtalperkanat: hah... I think we get bashed by ubuntu how we are currently doing init stuff (in swift) :)15:31
creihtor at least debian15:32
creihtthat part could be better ;)15:32
alperkanat:)15:32
notmynamecreiht: could be /different/15:32
notmyname;-)15:32
creihtlol15:32
mfischerI'd be more concerned if there were runtime dependencies on closed-source shared objects, not sure we have that problem15:32
*** FallenPegasus has quit IRC15:32
mfischerthe other stuff is trivial to port to $DISTRO15:32
notmynamemfischer: no15:32
alperkanathow sync'd is github and launchpad btw? if i checkout from github, would have i the latest code?15:32
creihtalperkanat: swift has very few dependencies, so it should be pretty easy15:33
notmynamemfischer: you can also see our (Rackspace Cloud Files) packaging for a starting point https://github.com/crashsite/swift_debian15:33
alperkanatcreiht: yes i'm grateful for that15:33
mfischerthanks notmyname15:33
notmynamealperkanat: I've been doing my best to keep the github mirror up to date. at worst it's only a commit or two behind15:33
alperkanatnotmyname: good to know that.. any plans for the future to only use one of them?15:34
huslageisn't there some sort of post-commit hook in BZR that could sync with github?15:34
notmynamealperkanat: there are openstack plans to move to github eventually. for now the swift mirror is "unofficial"15:34
notmynamealperkanat: just checked. the github mirror is up to date15:35
alperkanatok thanks15:35
mfischercan git truly replicate bzr's history?15:35
mfischerjust curious as to what we lose if we ever cut over15:36
notmynamemfischer: no history is lost15:36
mfischercool15:36
creihtmfischer: yeah you can see all the history in the current github mirrors15:36
huslageyour recruiters at rackspace aren't very good.15:37
mfischerI hope GitHub vastly improves their notification system before we fully transition to them15:37
mfischerhuslage, /msg me your info15:38
creihthuslage: heh... umm... I probably shouldn't say anything about that :/15:38
huslagelol15:39
*** relevant_taco has joined #openstack15:41
*** lorin1 has quit IRC15:41
*** mattray has joined #openstack15:43
*** reed has joined #openstack15:43
*** mnour has quit IRC15:45
*** mnour has joined #openstack15:45
*** wiz561 has joined #openstack15:46
wiz561hi! i'm new to cloud computing and have some very basic questions about how it all works.  is there anybody out there that can help?15:46
kbringardwiz561: probably ;-)15:47
kbringardgenerally you can just ask and someone will eventually see it and if they can help will respond15:47
wiz561thanks.  i'll give it a shot.  I started with eucalyptus but switched to openstack after hearing about canonical moving towards it.  i still have unanswered questions...  so here we go15:48
wiz561say I have my cloud controller setup and a quad-core node controller.  Can I run 8 virtual machines at the same time if I wanted to?  A related question is if I have two node controllers, will the load be 'balanced' across both nodes?15:49
mfischerthe question is not whether they will run, but how well15:49
mfischers/not/rarely/15:50
wiz561ok, so you can pretty much have a number of virtual machines that aren't dependent on the cores (cpu's).  it just will run slowly15:50
mfischersubject to memory and disk space, yeah15:50
*** rajeshb has quit IRC15:50
tadhgredglance question: I have glance  configured to use Swift . I notice that the glance details command spits out the Swift password for the glance use to stdout. Is this intentional?15:50
wiz561ok.  thanks.  so if you have two nodes, will a virtual machine load balance, or take advantage of the second node automatically?15:51
kbringardwiz561: yea, as for the balancing, last I recall it chooses compute nodes at random15:51
*** lborda has quit IRC15:51
wiz561ah, ok.  thanks15:51
*** huslage has quit IRC15:51
mfischerwiz561: the compute fabric has no knowledge of the load pattern of its tenants.15:51
*** ccc11 has quit IRC15:52
mfischerIOW, in a tiny deployment like that, 2 busy tenants could be placed on different compute nodes, or they may, by the luck of the draw, end up on the same node15:52
alperkanatif i clone the latest swift do i get swauth? or should i clone swauth or keystone? (is auth server mandatory btw?)15:52
wiz561mfischer: OK.  so if I understand you correctly, if the virtual machine utilization is at 100%, it will not distribute that load across other nodes; you're pretty much stuck with it.15:53
kbringardgenerally if I want to test something on a specific compute node I just fire up like 10 instances and generally at least one will land on the node I want, hah15:53
kbringardI think the new scheduler has some stuff to help with that though... not resource wise but "capability" wise15:53
mfischerwiz561: not automatically, no15:54
wiz561ok, thanks all.  next question...  if your cloud controller goes down, does the entire cloud goes down?15:55
kbringardyou could manually remove it from the pool... the VMs will still run, route, etc just no new Vms will be sent there15:55
mfischerit should not15:55
*** jedi4ever_ has joined #openstack15:55
kbringardbut that's a janky solution, imo15:55
mfischerVMs that are up should stay up15:55
*** jedi4ever_ has quit IRC15:55
*** mdomsch has joined #openstack15:55
kbringardmfischer wiz561: it should be noted however that if your network controller goes down, they won't route15:55
creihtalperkanat: there is a very basic version of auth included in swift15:55
creihttempauth or whatever it is called now15:56
kbringardbut they will stay up and running, and should start routing again when the network controller comes back15:56
*** adjohn has quit IRC15:56
mfischerkbringard: yeah, that scares the bejeezus out of me15:56
kbringardhaha, indeed15:56
mfischerthis is for flatDHCP right?15:56
wiz561ok.  i think i understand it.  i was thinking of having everything on one box and then have the nodes on other boxes.  but it sounds like the drawback would be less redundancy if something goes down.  it makes sense.15:56
creihtalperkanat: it is going to depend on how you are going to want to run it in production to determine what auth to use15:56
kbringardthere is code that is almost done, iirc, to add HA to the network controller15:57
kbringardand there are ways to do it already15:57
alperkanatcreiht: is it mandatory to have an auth server? (say i want to have a private swift behind a private network that doesn't need an authentication) i believe it's mandatory since it requires tokens15:57
kbringardhttp://unchainyourbrain.com/openstack/13-networking-in-nova15:57
kbringardvishy wrote a nice summary of it15:57
mfischerthat is a really freaking great page BTW15:57
mfischerthanks vishy15:57
kbringardindeed15:57
creihtalperkanat: well the auth middleware is what validates the token, so I think as long as the account is created, you could run without the middleware and run unauthenticated15:58
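What creiht describes — the auth middleware is just one stage in the proxy's WSGI pipeline, so leaving it out yields an unauthenticated proxy — corresponds to a proxy-server.conf pipeline of roughly this shape. The middleware names are illustrative (based on the swift docs of this era); only do this on a fully trusted private network.

```ini
[pipeline:main]
# with authentication:
# pipeline = healthcheck cache swauth proxy-server
# without the auth middleware (private, trusted network only):
pipeline = healthcheck cache proxy-server
```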
mfischerOpenStack will be all the better for great docs like this, especially if we can centralize them15:58
wiz561i think that pretty much explains everything pretty well for now.  i pulled up that link and it looks like there's a lot of info on it other than networking.  thank you for the information and answering all my easy questions!!15:58
kbringardwiz561: no problem, we like to help15:58
wiz561thanks!15:58
*** jedi4ever has quit IRC15:58
*** wiz561 has left #openstack15:58
alperkanatcreiht: still i need to use u/p which requires authentication? i think unauthenticated requests are rejected by the proxy15:59
*** uksysadmin has quit IRC16:00
*** tudamp has left #openstack16:01
*** jsalisbury has joined #openstack16:08
*** mfischer has quit IRC16:09
*** markvoelker has joined #openstack16:10
uvirtbot`New bug: #817079 in nova "OSAPI v1.1 image create should be a server action" [Undecided,In progress] https://launchpad.net/bugs/81707916:12
uvirtbot`New bug: #817082 in nova "OSAPI v1.1 servers metadata does not match the spec" [Undecided,In progress] https://launchpad.net/bugs/81708216:12
*** dendro-afk is now known as dendrobates16:14
*** ddutta has joined #openstack16:14
*** markvoelker has quit IRC16:14
*** Ephur has quit IRC16:15
*** thingee has joined #openstack16:17
*** morfeas has joined #openstack16:17
*** dobber has quit IRC16:18
*** thingee1 has joined #openstack16:19
*** thingee1 has left #openstack16:20
*** thingee has quit IRC16:21
*** markvoelker has joined #openstack16:21
*** parkerro has joined #openstack16:22
*** morfeas has quit IRC16:24
*** czajkowski has quit IRC16:24
*** morfeas has joined #openstack16:24
*** hggdh has quit IRC16:25
*** czajkowski has joined #openstack16:26
*** nagyz has quit IRC16:26
*** czajkowski has quit IRC16:26
*** jdurgin has joined #openstack16:30
*** czajkowski has joined #openstack16:34
*** hggdh has joined #openstack16:36
*** mdomsch has quit IRC16:36
*** marrusl has joined #openstack16:39
*** Tribaal_ch has quit IRC16:43
*** mnour has quit IRC16:44
*** mnour1 has joined #openstack16:44
*** nelson____ has quit IRC16:45
*** nelson____ has joined #openstack16:45
*** alperkanat has quit IRC16:45
*** kashyap has joined #openstack16:46
*** cereal_bars has joined #openstack16:47
*** willaerk has quit IRC16:48
*** rupakg has joined #openstack16:51
*** ejat has joined #openstack16:51
*** ejat has joined #openstack16:51
*** FallenPegasus has joined #openstack16:51
uvirtbot`New bug: #817107 in nova "Exceptions should be united and exception.wrap_exception should be updated or removed" [Undecided,New] https://launchpad.net/bugs/81710716:51
*** uvirtbot` is now known as uvirtbot16:52
*** mnour1 has quit IRC16:53
*** mnour has joined #openstack16:53
*** marrusl has quit IRC16:55
*** ryker has joined #openstack16:56
*** ryker1 has quit IRC16:56
*** maplebed has joined #openstack16:57
*** koolhead11 is now known as kooolhead11|afk17:02
*** relevant_taco has quit IRC17:02
*** FallenPegasus has quit IRC17:04
*** relevant_taco has joined #openstack17:05
*** dijenerate has quit IRC17:06
uvirtbotNew bug: #817115 in swift "swift tool (st) compares an integer-typed object size to a string-typed segment size on upload" [Undecided,New] https://launchpad.net/bugs/81711517:06
*** mfischer has joined #openstack17:06
*** mfischer has quit IRC17:07
*** mfischer has joined #openstack17:10
*** andy-hk has joined #openstack17:14
*** ccustine has joined #openstack17:14
*** dendrobates is now known as dendro-afk17:16
*** joearnold has joined #openstack17:16
*** adjohn has joined #openstack17:17
*** mnour has quit IRC17:19
uvirtbotNew bug: #817121 in glance "URI parsing still problematic" [High,In progress] https://launchpad.net/bugs/81712117:21
*** relevant_taco has quit IRC17:24
*** worstadmin has joined #openstack17:24
*** alandman has quit IRC17:27
*** rupakg has quit IRC17:31
*** hingo has joined #openstack17:31
*** rupakg has joined #openstack17:32
*** FallenPegasus has joined #openstack17:32
*** dijenerate has joined #openstack17:32
*** joearnold has quit IRC17:33
*** joearnold has joined #openstack17:33
*** ryker1 has joined #openstack17:35
*** ryker has quit IRC17:35
*** tryggvil has quit IRC17:37
*** koolhead17 has joined #openstack17:39
*** kashyap has quit IRC17:40
*** kashyap has joined #openstack17:40
*** FallenPegasus has quit IRC17:47
*** FallenPegasus has joined #openstack17:48
*** darraghb has quit IRC17:54
*** mrjazzcat has quit IRC17:55
*** pguth66 has joined #openstack17:56
*** worstadmin has quit IRC17:56
*** mrjazzcat has joined #openstack18:02
*** stanchan has joined #openstack18:09
*** ejat has quit IRC18:10
*** cereal_bars has quit IRC18:11
*** huslage has joined #openstack18:14
*** ejat has joined #openstack18:14
*** ejat has joined #openstack18:14
*** huslage_ has joined #openstack18:14
*** huslage has quit IRC18:14
*** huslage_ is now known as huslage18:14
*** hingo has quit IRC18:15
*** aamonten has joined #openstack18:17
*** huslage has quit IRC18:17
*** huslage has joined #openstack18:17
*** marrusl has joined #openstack18:19
*** hingo has joined #openstack18:20
*** bsza has quit IRC18:20
*** bsza has joined #openstack18:22
*** ejat has quit IRC18:22
*** marrusl has quit IRC18:24
*** mwhooker has joined #openstack18:26
*** yLothar has quit IRC18:28
*** lborda has joined #openstack18:35
*** mwhooker has quit IRC18:35
*** mwhooker has joined #openstack18:36
*** bcwaldon has quit IRC18:38
*** AhmedSoliman has joined #openstack18:39
*** jkoelker has quit IRC18:40
*** ryker has joined #openstack18:40
*** bcwaldon has joined #openstack18:40
*** jkoelker has joined #openstack18:40
*** bcwaldon_ has joined #openstack18:41
*** bcwaldon_ has quit IRC18:41
*** bcwaldon_ has joined #openstack18:42
*** hadrian has quit IRC18:42
*** kashyap has quit IRC18:44
*** ryker1 has quit IRC18:44
*** marrusl has joined #openstack18:44
*** bcwaldon has quit IRC18:44
*** duffman has quit IRC18:45
*** tjikkun has joined #openstack18:45
*** tjikkun has joined #openstack18:45
*** duffman has joined #openstack18:45
*** kernelfreak has joined #openstack18:54
*** stewart has joined #openstack18:55
*** clauden has joined #openstack18:58
*** sriramkr has joined #openstack18:58
*** fabiand__ has joined #openstack19:01
*** AimanA has joined #openstack19:02
*** shehjart has quit IRC19:02
*** stewart has quit IRC19:12
*** joearnol_ has joined #openstack19:13
*** joearnold has quit IRC19:14
*** joearnold has joined #openstack19:14
*** mfischer has quit IRC19:15
*** HugoKuo__ has quit IRC19:15
*** FallenPegasus has quit IRC19:17
*** joearnol_ has quit IRC19:17
*** hingo has quit IRC19:20
uvirtbotNew bug: #817178 in nova "it is possible to create networks with the same cidr even though logic exists to prevent it" [Undecided,New] https://launchpad.net/bugs/81717819:21
*** lorin1 has joined #openstack19:26
*** rupakg has quit IRC19:34
*** xcombelle has joined #openstack19:36
*** fabiand__ has left #openstack19:38
*** AhmedSoliman has quit IRC19:38
*** ktbe has quit IRC19:42
*** brd_from_italy has joined #openstack19:43
*** robix has quit IRC19:46
*** jaypipes has quit IRC19:49
*** ppushor has joined #openstack19:49
*** bcwaldon_ has quit IRC19:50
*** bcwaldon has joined #openstack19:51
*** fabiokung has quit IRC19:55
*** fabiokung has joined #openstack19:55
*** k0stask has joined #openstack19:59
*** deepa has quit IRC20:06
*** floehmann has joined #openstack20:11
*** huslage_ has joined #openstack20:15
*** koolhead17 has quit IRC20:15
*** koolhead17 has joined #openstack20:16
*** huslage has quit IRC20:17
*** huslage_ is now known as huslage20:17
*** rcc has joined #openstack20:18
*** rcc_ has joined #openstack20:18
*** rcc_ has left #openstack20:18
*** hadrian has joined #openstack20:21
*** cereal_bars has joined #openstack20:24
*** AWR_ has joined #openstack20:24
*** AWR_ has quit IRC20:25
*** brd_from_italy has quit IRC20:28
*** Ephur has joined #openstack20:29
*** rcc has quit IRC20:29
*** med_out is now known as medberry20:32
*** lorin1 has quit IRC20:45
*** ctennis has quit IRC20:46
*** timr has joined #openstack20:52
*** tadhgred has quit IRC20:54
*** xnyl has joined #openstack20:56
xnylwhat do you guys use to create the images for your cloud ? kvm? what about if you use xenserver as the underlaying hypervisor ?20:57
*** martine has quit IRC21:00
*** msivanes has quit IRC21:02
*** aamonten has quit IRC21:02
*** mdomsch has joined #openstack21:04
ppushorNot sure what you mean by create but I am using kvm.  It's really a features / preference judgement call.21:07
*** mahmoh has joined #openstack21:08
*** huslage has quit IRC21:10
*** Ephur has quit IRC21:10
catarrhineHas anyone tried this: http://deliver.citrix.com/projectolympus  what's so special about it exactly?  What does it have for a client, is it storage or just hpc?21:20
uvirtbotNew bug: #817228 in nova "contrib/nova.sh does not work from within a screen" [Undecided,In progress] https://launchpad.net/bugs/81722821:21
*** jsalisbury has quit IRC21:21
*** rjimenez has joined #openstack21:25
*** ryker has quit IRC21:26
*** elventails has joined #openstack21:26
*** ryker has joined #openstack21:26
*** k0stask has quit IRC21:27
*** FallenPegasus has joined #openstack21:28
*** k0stask has joined #openstack21:31
*** koolhead17 has quit IRC21:33
*** PeteDaGuru has joined #openstack21:35
*** FallenPegasus has quit IRC21:36
*** PeteDaGuru has quit IRC21:36
*** PeteDaGuru has joined #openstack21:36
*** pdjan has joined #openstack21:38
*** PeteDaGuru has quit IRC21:38
*** PeteDaGuru has joined #openstack21:38
*** joearnold has quit IRC21:39
*** dannf has joined #openstack21:41
*** joearnold has joined #openstack21:44
ppushorProject Olympus is simply going to be a bundle of openstack and a pre-configured and tested Xen install together.21:44
*** caribou has quit IRC21:45
pdjantrying swift install on multiple servers, can't seem to get proxy server started. netstat -ntpl shows nothing listening on port 8080, swift tool 'st -A https://xxxx:8080/auth/v1.0 -U system:root -K testpass stat' reports socket.error: Error 111 Connection refused. Any ideas or suggestions?21:45
*** FallenPegasus has joined #openstack21:45
*** lts has quit IRC21:46
creihtpdjan: did you check syslog on the server for errors?21:46
pdjancreiht: The only error I see in syslog is, proxy-server  UNCAUGHT EXCEPTION#012Traceback.........21:49
creihtpdjan: can you paste the whole thing to paste.openstack.org?21:49
*** tryggvil has joined #openstack21:49
pdjancreiht: I've been trying multi-server install for few days, and i always get this exception. I must be making some obvious mistake, although I got single server install without any issues following SAIO instructions. Sure I will paste it give me a sec21:50
creihtk21:50
*** thingee has joined #openstack21:51
*** dspano has quit IRC21:52
*** willaerk has joined #openstack21:54
*** mfer has quit IRC21:55
*** cereal_bars has quit IRC21:55
pdjancreiht: http://paste.openstack.org/raw/1962/21:55
creihtpdjan: also, what version are you installing and which url for the instructions are you following?21:55
*** ryker has quit IRC21:56
*** ryker1 has joined #openstack21:56
*** aliguori has quit IRC21:56
creihthrm... looks like the traceback is getting truncated21:57
*** matiu_ has joined #openstack21:57
*** msinhore has quit IRC21:57
creihtpdjan: can you paste your proxy config?21:57
pdjancreiht: newest version swift 1.4.2 on Ubuntu 10.04, following http://swift.openstack.org/howto_installmultinode.html but then I mixed a little with SAIO instructions on the storage nodes21:57
creihtk21:57
*** negronjl has quit IRC21:58
*** stanchan has quit IRC21:58
pdjanhttp://paste.openstack.org/raw/4PwcCqvoEEMgxKTTizLy/21:59
creihtpdjan: do you have swauth installed in addition to swift?22:00
*** matiu__ has joined #openstack22:00
pdjancreiht: yep22:00
creihtk22:00
creihtI'm flying a little blind since the whole stacktrace isn't printed22:00
creihtbut it looks like it is a problem with trying to read the config22:01
creihtpdjan: give me a sec to whip up a test22:01
pdjancreiht: sure22:01
*** stanchan has joined #openstack22:03
*** sriramkr has quit IRC22:03
pdjancreiht: see if this helps, http://paste.openstack.org/raw/1964/22:03
*** matiu_ has quit IRC22:04
*** ecarlin has joined #openstack22:04
*** matiu__ is now known as matiu22:04
creihtpdjan: try running this on your proxy node:22:04
creihtpython -c "from paste.deploy import loadapp; loadapp('config:/etc/swift/proxy-server.conf')"22:05
*** matiu has quit IRC22:05
*** matiu has joined #openstack22:05
creihtand see if it returns an error, if so paste that :)22:05
*** ecarlin has quit IRC22:06
uvirtbotNew bug: #817238 in swift "Some exceptions get truncated in the logs" [Undecided,New] https://launchpad.net/bugs/81723822:06
pdjancreiht: http://paste.openstack.org/raw/1965/22:08
creihtahh... there we go22:08
creihthrm22:08
pdjanbtw I found same Importerror while running swauth-prep, modified the code to say 'from urlparse import urlparse'. not sure if it helps22:09
creihtpdjan: we had to have a custom urlparse to support ipv622:09
creihtpdjan: so if you open a python shell, can you run:22:10
creihtfrom swift.common.utils import TRUE_VALUES22:11
creiht?22:11
creihtjust to make sure the swift base stuff is installed22:11
*** aliguori has joined #openstack22:11
pdjancreiht: previous swauth-prep code was pointing to "from swift.common.utils import urlparse" but I looked utils doesn't have module named urlparse22:12
pdjancreiht: sure let me try22:12
creihtyeah that is what  is confusing me22:12
*** negronjl has joined #openstack22:12
creihtas it should be there :)22:12
creihtpdjan: did you install from packages, or code?22:12
*** ameade has quit IRC22:14
pdjancreiht: yes, I can run the command 'from swift.common.utils import TRUE_VALUES'. I installed using packages "apt-get install swift-proxy" as per multinode instructions22:14
*** thingee has left #openstack22:14
creihtk22:14
creihtjust wanted to verify22:14
pdjancreiht: no worries22:15
* creiht sighs22:18
creihtpdjan: I think you have 1.2 installed :(22:19
*** tomeff has quit IRC22:19
creihtpdjan: run: python -c "import swift; print swift.__version__"22:19
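The stale-package problem pdjan hit here (swift 1.2 installed when 1.4.2 was expected) can be caught with a simple version comparison. This is a hypothetical helper, not part of swift; it assumes dotted-numeric version strings like the ones `swift.__version__` reports.

```python
def version_tuple(version):
    """Turn '1.4.2' into (1, 4, 2) for ordered comparison."""
    return tuple(int(part) for part in version.split("."))

def check_swift_version(installed, required="1.4.2"):
    """Raise if the installed version is older than required."""
    if version_tuple(installed) < version_tuple(required):
        raise RuntimeError(
            "swift %s installed, need >= %s -- check which PPA is configured"
            % (installed, required))
```

Comparing as tuples rather than strings avoids the classic trap where "1.10" sorts before "1.2" lexicographically.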
*** ianloic has quit IRC22:20
*** scottsanchez has quit IRC22:20
*** agoddard has quit IRC22:20
pdjancreiht: really....jeez, yeah. that confirms it22:20
*** vernhart has joined #openstack22:20
creihtpdjan: ok our ppas are screwed up (again) :(22:20
pdjancreiht: how did I get 1.2, I don't understand22:20
creihtpdjan: I'm really sorry about this22:20
*** negronjl has quit IRC22:20
creihtgive me a min to track this down22:20
*** negronjl has joined #openstack22:21
creihtsoren, mtaylor: anyone remember who set the names for all of our ppas for ~core-swift?22:21
pdjancreiht: no worries, as long as I get it installed correctly, I really want to understand swift better22:21
creihtpdjan: no worries... thanks for your patience22:21
creihtpdjan: ok so in those instructions: the repo to add should be:22:22
creihtadd-apt-repository ppa:swift-core/release22:23
creihtinstead of /ppa22:23
creihtannegentle: -^22:23
*** negronjl has quit IRC22:23
*** ryker1 has quit IRC22:23
*** negronjl has joined #openstack22:23
*** ryker has joined #openstack22:23
*** hingo has joined #openstack22:24
creihtsoren, mtaylor: ignore my last request, we'll get the docs changed22:24
pdjancreiht: Great, I will give it a try tomorrow and get back with you guys if I see any issues. Thanks for sorting this out : )22:25
creihtpdjan: no problem, and thanks for reporting the issue22:25
creihtpdjan: please let me know if it works :)22:25
pdjancreiht: Do you recommend re-running the install by adding ppa:swift-core/release or start over or can I do uninstall of swift somehow?22:27
*** ryker has quit IRC22:28
creihtpdjan: not sure actually... my apt-foo isn't that great22:30
*** ryker has joined #openstack22:31
pdjancreiht: alright, will let you know how it goes. Thanks22:31
*** jakedahn has joined #openstack22:31
creihtpdjan: I think if you just add the right ppa, then do an apt-get update, apt-get upgrade, that may give you the newer versions22:32
*** gaitan has quit IRC22:32
*** jakedahn has quit IRC22:33
*** negronjl_ has joined #openstack22:33
*** gondoi has quit IRC22:33
*** negronjl has quit IRC22:34
*** pdjan has quit IRC22:35
*** FallenPegasus has quit IRC22:36
*** Shentonfreude has quit IRC22:39
*** hingo has quit IRC22:41
*** bcwaldon has quit IRC22:41
vishycreiht: ttx ... here ... http://wiki.openstack.org/PPAs22:42
*** PeteDaGuru has quit IRC22:43
*** PeteDaGuru has joined #openstack22:44
*** kbringard has quit IRC22:44
*** ianloic has joined #openstack22:44
*** ryker1 has joined #openstack22:45
*** ryker has quit IRC22:45
*** ldlework has quit IRC22:45
*** scollier has joined #openstack22:46
*** ctennis has joined #openstack22:47
*** ctennis has joined #openstack22:47
*** PeteDaGuru has quit IRC22:48
*** abhi_ has joined #openstack22:49
*** mattray has quit IRC22:49
*** bsza has quit IRC22:50
*** HowardRoark has quit IRC22:50
*** abhi__ has joined #openstack22:51
*** andy-hk has quit IRC22:51
*** andy-hk has joined #openstack22:52
*** abhi__ has quit IRC22:52
*** abhi_ has quit IRC22:52
*** camm_ has quit IRC22:56
*** msinhore has joined #openstack22:58
*** ianloic has quit IRC23:02
*** jakedahn has joined #openstack23:02
*** mnour has joined #openstack23:09
*** miclorb_ has joined #openstack23:14
*** jkoelker has quit IRC23:14
*** willaerk has quit IRC23:15
*** jeffjapan has joined #openstack23:17
*** countspongebob has joined #openstack23:18
countspongebobAnyone at OSCON?23:19
*** tryggvil has quit IRC23:19
*** tryggvil has joined #openstack23:22
*** joearnold has quit IRC23:28
*** ccustine has quit IRC23:29
uvirtbotNew bug: #817265 in nova "resize exception on xenserver" [Undecided,New] https://launchpad.net/bugs/81726523:31
*** HowardRoark has joined #openstack23:31
*** mfischer has joined #openstack23:36
*** ianloic has joined #openstack23:37
*** mfischer has quit IRC23:40
*** negronjl_ is now known as negronjl23:40
*** ppushor has quit IRC23:40
*** mfischer has joined #openstack23:47
*** ldlework has joined #openstack23:48
*** scottsanchez has joined #openstack23:53
*** xnyl has quit IRC23:55

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!