Thursday, 2011-08-11

01:46 <rfz01> Hi all, I've installed OpenStack, but after importing a UEC image via uec-publish-tarball, the image remains in the "queued" status. I'm running Natty 11.04 with the universe OpenStack debs.
01:53 <sadeghis> hi guys
01:53 <sadeghis> oops, wrong nick for this server
01:53 *** sadeghis is now known as herbster
01:53 <herbster> there we go
01:53 <herbster> so, hi guys :P ... we are progressing quite nicely in our OpenStack deployment but have hit a hitch
01:54 <herbster> nova-api only seems to run on a single core
01:54 <herbster> our cloud controller box has 4 cores; we would like nova-api to run across all 4 cores
01:54 <herbster> (when the load is high enough to demand it)
01:55 <herbster> any thoughts?
02:18 <HugoKuo> morning
02:19 <herbster> hi hugo
02:20 *** herbster has quit IRC
02:23 <HugoKuo> hello herbster
02:38 <deepest> Hi you guys
02:38 <deepest> I want to ask you something
02:38 <deepest> I want to know about the size to store data in Swift
02:39 <deepest> by the way, I installed and configured Swift successfully
02:39 <deepest> but I want to know what happens if I have some different storage nodes
02:41 <deepest> maybe one storage node = 100 GB, another one = 250 GB, and the last one = 500 GB
02:42 <deepest> will the maximum of transferred data be equal to 100 GB, or 500 GB, or no limit?
02:43 <deepest> do you have any ideas? or any suggestion about that, or a document I should refer to?
02:46 <Gaurav> Why does OpenStack use the Eucalyptus tools (euca2ools)?
03:10 <secbitchris> I keep hearing about this thing
03:13 <creiht> deepest: that is where the weight in the rings comes into play
03:14 <creiht> each device in the ring has a weight, so, for example, you could set the weight in the ring to be the number of GB per device
03:14 <creiht> and it will put a proportional number of partitions on each device, so that they will fill up relatively evenly
03:16 <deepest> can you describe it for me in more detail? if possible, please give me some examples
03:17 <creiht> say, for example, you have some drives that are 1T and some that are 2T in your nodes
03:18 <deepest> right
03:18 <creiht> you might decide to say 1T = a weight of 100
03:18 <creiht> thus you would want to set your 2T drives = a weight of 200
03:19 <creiht> so when you are building your ring, the 1T drive may look something like:
03:19 <creiht> swift-ring-builder object.builder add z1-127.0.0.1:6010/sdb1 1
03:19 <creiht> erm
03:20 <creiht> swift-ring-builder object.builder add z1-127.0.0.1:6010/sdb1 100
03:20 <creiht> the 100 at the end is the weight
03:20 <creiht> if the next drive is a 2T drive, then you would add it like:
03:20 <creiht> swift-ring-builder object.builder add z1-127.0.0.1:6010/sdc1 200
03:20 <deepest> ok, I understood
03:21 <deepest> but I have another question
03:21 <creiht> k
03:22 <deepest> what is the maximum size of data I can transfer to Swift if I have 2 drives (first one = 1T, second one = 2T)?
03:23 <creiht> of a single file? or the total space Swift is capable of using?
03:24 <deepest> the maximum size of data received, out of the total space of Swift
03:25 <creiht> swift would use it all fairly equally, so swift would fill things up to the full 3T
03:26 <deepest> I mean Swift has total space = 3T, drive 1 = 1T, drive 2 = 2T, and maybe drive 3 = 3T. Can I send 1.5T to Swift?
03:26 <creiht> that said, we recommend trying not to use more than 85% of the space
03:26 <creiht> to leave headroom for replication, failover, etc.
03:26 <deepest> ok
03:27 <creiht> and that is total available space
03:27 <creiht> you have to divide by 3 (since we store 3 replicas of everything)
03:27 <deepest> it means I can send a data size larger than the minimum drive, right?
03:28 <deepest> don't care about the drives, but care about the total space, right?
03:28 <creiht> ahh... yes
03:28 <deepest> ok, I understood
03:28 <creiht> as long as you have them set up correctly in the ring with the right weights
03:30 <deepest> how can I define the correct weights? by 1T = 100, 2T = 200, and Swift can understand?
03:30 <creiht> yes
03:30 <creiht> so, for example, say you have 2 drives, one that is 1T and one 2T, with the above weights
03:32 <creiht> if you write 900 objects to the system, you would find 300 on the 1T drive and 600 objects on the 2T drive
03:32 <creiht> this is oversimplified a bit, but should give you an idea
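The weight arithmetic creiht walks through can be sketched in a few lines of Python. This is an illustration only: Swift's real ring builder assigns partitions (not whole objects) and also accounts for zones and replicas.

```python
# Illustration of weight-proportional placement (not Swift's actual
# ring code): each device receives a share of objects proportional
# to its weight, so a 2T drive (weight 200) ends up with twice as
# many objects as a 1T drive (weight 100).
def proportional_counts(weights, total_objects):
    total_weight = sum(weights.values())
    return {dev: total_objects * w // total_weight
            for dev, w in weights.items()}

# creiht's example: 900 objects across a 1T and a 2T drive
print(proportional_counts({"sdb1": 100, "sdc1": 200}, 900))
# {'sdb1': 300, 'sdc1': 600}
```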
03:35 <deepest> ok, I'll write one more question that I'm still confused about
03:35 <creiht> sure
03:36 <deepest> let's consider a new requirement
03:37 <deepest> Swift has 3 drives, 1st drive = 100GB, 2nd drive = 1T, 3rd = 1T --> total 2.1T
03:38 <deepest> if I send 700GB to Swift
03:38 <deepest> the first 100GB will fill drives 1, 2, 3
03:39 <creiht> not exactly
03:39 <creiht> it will fill at the same rate; the first drive will only get 10% of the files
03:39 <deepest> because we have at least 3 redundant copies, right
03:39 <creiht> oh I see
03:40 <creiht> if you *only* have 3 drives
03:40 <creiht> then yes, you could only use 100GB in that case
03:40 <creiht> swift was written to use thousands of drives
03:41 <deepest> why does the first drive just get 10%?
03:41 <deepest> because of rebalancing?
03:41 <deepest> ???
03:42 <creiht> I was thinking that there would be more drives (as in a large production cluster)
03:42 <creiht> but in the corner case of having just 3 drives, it would cause a larger issue
03:42 <creiht> in that it would fill up just as fast as the other 2
03:43 <creiht> in a larger system, if you have different sized drives, everything balances out pretty well
03:44 <creiht> swift isn't really designed to work well on just 3 drives
03:45 <deepest> ok, I understood
03:45 <deepest> thank you so much, creiht
03:45 <creiht> no problem
03:46 <creiht> now time to get some sleep :)
03:46 <deepest> and I can add more storage nodes to the Swift system and just set the right weights
03:46 <deepest> ok
03:46 <creiht> yup
03:46 <deepest> sleep well
03:46 <deepest> have a nice dream
03:47 <creiht> ideally we recommend a minimum of 5 storage nodes, with several storage drives in each node
03:47 <creiht> each storage node being in a separate zone
03:47 <creiht> you can then add storage by adding more drives to each node
03:47 <creiht> then either add another machine as a new zone
03:47 <creiht> or add 1 more machine to each zone
03:47 <creiht> etc. etc.
03:48 <creiht> each time updating the ring
03:48 <creiht> the replication will take care of making sure everything gets to where it is supposed to be
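The capacity rules of thumb from this exchange (3 replicas, stay under ~85% full, and the 3-drive corner case) can be put into a quick back-of-the-envelope helper. This is only a sketch of the arithmetic discussed above, not an official formula.

```python
REPLICAS = 3        # swift stores 3 copies of everything
FILL_LIMIT = 0.85   # leave ~15% headroom for replication/failover

def usable_gb(drive_sizes_gb):
    """Rough usable capacity across many drives of mixed sizes."""
    return sum(drive_sizes_gb) * FILL_LIMIT / REPLICAS

def usable_gb_three_drives(drive_sizes_gb):
    """Corner case from the discussion: with only 3 drives and 3
    replicas, every object lands on every drive, so the smallest
    drive caps the usable space."""
    return min(drive_sizes_gb) * FILL_LIMIT

print(usable_gb([1000, 2000]))                    # ≈ 850.0
print(usable_gb_three_drives([100, 1000, 1000]))  # ≈ 85.0
```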
03:49 <deepest> ok, thanks
03:56 <limu> hi all! Is there any open source software to access Swift storage? BS or CS is ok. Thanks
03:56 <creiht> limu: there is a command line tool in swift, plus Cyberduck is very popular
03:57 <creiht> there are other tools as well if you have a specific use case you are looking for
03:58 <creiht> and now I am really going to sleep :)
07:04 <chetan_> For an OpenStack setup, which OS is recommended: Ubuntu or RHEL?
07:14 <siwos> ubuntu
07:14 <siwos> definitely
07:15 <siwos> you've got an installation script for ubuntu
07:15 <siwos> for rhel/fedora you need to add third-party repos
07:15 <siwos> my question: is nwfilter used by openstack right now (cactus)?
07:25 <chetan_> siwos: I used ubuntu and installed the openstack trunk repo for ubuntu
07:25 <chetan_> thanks a lot
08:03 <lotrpy> hello, I want to deploy openstack on a single computer. "python deploy.py install" stops at nova-volume, and "service nova-volume status" returns stop/waiting. What should I do now?
08:06 <siwos> nova-volume is not crucial for launching vm instances
08:07 <siwos> in the simplest case you should be running: nova-compute, nova-network, nova-scheduler, nova-api
08:07 <siwos> nova-volume is for attaching persistent block storage to your VMs
08:11 <lotrpy> siwos, thanks for the help, appreciate it, and sorry for my broken English
08:11 <thickskin> hi all
08:12 <thickskin> is there someone who knows about using qcow2 images in xen?
08:14 <lotrpy> after "python deploy.py install" interrupts and exits, "sudo nova-manage service list" returns nova-compute, nova-scheduler, nova-network enabled. I'm interested in testing the nova part, so is there any way to bypass nova-volume (or replace it with a plain file)?
08:58 <medhu> hello all
08:59 <medhu> any resource for "installing openstack on archlinux"?
09:25 <mattt> medhu: no, but let us know how you get on w/ that :)
10:22 <teamrot> hey all
10:22 <teamrot> I'm trying to size a multinode installation, and I'm wondering which network cards to choose for the
10:22 <teamrot> service network that connects the compute nodes with the network node: 1GBit cards or 10GBit?
10:26 <teamrot> as I understand it, the service network is "only" used for transmitting information about the several nodes, and not for transferring virtual images, volumes, and that kind of data
10:31 <uvirtbot> New bug: #824433 in nova "Libvirt/KVM snapshot only in RAW image format" [Undecided,New] https://launchpad.net/bugs/824433
10:46 <kidrock> Hi,
10:46 <kidrock> I installed a new openstack
10:46 <kidrock> diablo-3 milestone version
10:47 <kidrock> I created an admin user and project by: nova-manage user admin thangdd8
10:47 <kidrock> nova-manage project create cncloud thangdd8
10:47 <kidrock> nova-manage network create private 10.0.1.0/24 1 256 br100
10:48 <kidrock> nova-manage project zipfile cncloud thangdd8 creds/cncloud/nova.zip
10:48 <kidrock> cd creds/cncloud/
10:48 <kidrock> unzip nova.zip
10:48 <kidrock> source novarc
10:48 <kidrock> after that
10:48 <kidrock> I ran euca-describe-instances
10:49 <kidrock> and the following error occurred
10:49 <kidrock> http://paste.openstack.org/show/2148/
10:50 <kidrock> anyone help? Thanks beforehand
11:07 <soren> kidrock: Possibly time skew?
11:07 <soren> kidrock: Oh, on the same box.
11:07 <soren> nm, then.
11:11 <kidrock> I installed all nova services on one node
13:41 <anp> Hi
13:41 <anp> I am trying to install OpenStack Diablo-2 on CentOS 6
13:42 <anp> I'm referring to the url: http://wiki.openstack.org/NovaInstall/RHEL6Notes-Diablo-2
13:42 <anp> but yum is not able to get the dependencies from the Cactus repo
13:42 <anp> how can I do that?
13:44 <anp> please help me
13:44 <jtanner> anp, it depends on which deps are missing
13:44 <jtanner> you can probably get better/faster help in #rhel for troubleshooting yum issues
13:45 <anp> jtanner: I am doing this over a base installation
13:45 <anp> so I need all the deps in: http://yum.griddynamics.net/yum/cactus/deps/
13:45 <anp> how can I do that?
13:46 <anp> it's not a yum issue, I believe
13:46 <anp> the Diablo-2 build needs the deps from Cactus
13:47 <anp> jtanner: do you think I should install Cactus first and then update to Diablo?
14:08 <gnu111> I got a natty image up and running but can't ssh. I was able to get to the console from virt-viewer. As I can't ssh, I can't use my keys. I would like to log in to check what is going on. How can I log in to the machine?
14:16 <kbringard> can you ping it?
14:16 <kbringard> gnu111 ^^
14:16 <gnu111> no, I can't ping
14:16 <kbringard> so the problem is that it's not getting network
14:16 <kbringard> well, I should step back
14:17 <kbringard> did you euca-authorize ICMP and ssh?
14:17 <gnu111> yes.
14:17 <kbringard> I would also try euca-get-console-output
14:17 <kbringard> that'll help you see what's happening on the VM
14:17 <kbringard> my guess is that it's having trouble getting to the metadata service
14:17 <gnu111> ok
14:18 <kbringard> that takes a while to time out, and the booting doesn't finish until that happens
14:18 <kbringard> but, I really have no idea
14:18 <gnu111> yup: 2011-08-11 14:04:38,145 - DataSourceEc2.py[WARNING]: waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id
14:18 <kbringard> it would seem the problem is that you're not getting network, either because it couldn't talk to the DHCP server, or because the bridge didn't attach right
14:18 <gnu111> 146 - DataSourceEc2.py[WARNING]:   14:04:38 [ 1/30]: url error [[Errno 101] Network is unreachable]
14:19 <kbringard> I'd check the network controller to make sure the interfaces and bridges are up and (if you're using vlan mode) tagged properly
14:19 <kbringard> I'd also check the compute node for the same
14:19 <gnu111> ok. yes, I am using vlan mode.
14:19 <worstadmin> make sure you're trunking :)
14:19 <kbringard> this ^^
14:20 <kbringard> or more broadly, make sure your physical network can handle vlans
14:20 <kbringard> it may be looking at the packets and going "wtf?! DROP"
14:21 <kbringard> anyway, I've got to reboot and have some stuff to do, so I'll bbl
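The console output gnu111 pasted is cloud-init waiting on the EC2-style metadata service. A quick way to probe that URL from inside a guest (an illustration, not an official tool) is:

```python
# Probe the metadata URL cloud-init is waiting on. If this returns
# False from inside the guest, the VM has no route to the metadata
# service (bad bridge/VLAN/DHCP, as discussed above).
import urllib.request

METADATA_URL = "http://169.254.169.254/2009-04-04/meta-data/instance-id"

def metadata_reachable(url=METADATA_URL, timeout=3):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.getcode() == 200
    except OSError:  # covers URLError, connection refused, timeouts
        return False
```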
15:25 <shoof> What would be the best way to position zones in swift to have multi-data-center redundancy (2 data centers)? Based on the 3 replicas, I would assume that 4 total zones (2 in each data center) would guarantee that all data is written to both data centers and no data is lost if an entire data center is lost
15:30 <creiht> shoof: while you could do that, it may not be optimal
15:30 <btorch> shoof: I hope that never really happens to you :) ... but the first thing I would consider is to make sure that the swift zones located in a single DC are in unique DC phases
15:32 <creiht> shoof: an alternate approach would be to have a separate cluster in each DC, and use the recently added container sync for data that absolutely has to be replicated
15:32 <uvirtbot> New bug: #824601 in nova "OSAPI v1.1 PUT /servers/<id> response incorrect" [Undecided,New] https://launchpad.net/bugs/824601
15:32 <btorch> shoof: yeah, like creiht said, not really optimal ... creiht, perhaps container sync, when it gets released, could be an option for having the data at another region
15:33 * btorch swears he is not copying what you just mentioned :)
15:33 <creiht> shoof: can you describe your use case a bit more? What type of connectivity do you have between the DCs, and how much data are you expecting to pass through the system?
15:33 <creiht> btorch: lol
15:35 <creiht> the swift team would like to have better cross-DC replication, but that is a difficult problem to solve. The container sync functionality is the first step in trying to get there
15:36 <shoof> We have a dedicated line between geographically separated DCs. Our goal is eventually to run active/active, but we are currently using the remote DC for failover. We are experimenting with migrating from NAS to swift to remove mounts and allow local writes in each DC.
15:37 <notmyname> sounds like container sync would be a better option than one logical cluster across both DCs
15:37 <shoof> initially, we would be looking at roughly 10TB in phase 1
15:40 <shoof> container sync seems quite new - any suggestions on resources to dive into it?
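For reference, the container sync feature discussed above is driven by two per-container headers. The header names below come from Swift's container-sync design; the URL, key, and token values are made-up placeholders, and this is only a sketch of the request headers, not a full client.

```python
# Sketch of the headers that tie a local container to its remote peer
# for Swift container sync. Both containers must point at each other
# with the same shared key. All values here are placeholders.
def container_sync_headers(auth_token, sync_to_url, sync_key):
    return {
        "X-Auth-Token": auth_token,          # normal auth token
        "X-Container-Sync-To": sync_to_url,  # peer container's URL
        "X-Container-Sync-Key": sync_key,    # shared secret
    }

hdrs = container_sync_headers(
    auth_token="AUTH_tk_placeholder",
    sync_to_url="http://dc2.example.com:8080/v1/AUTH_acct/backups",
    sync_key="secret-shared-by-both-containers",
)
```

These headers would be sent in a POST to the local container in each data center, with each side's `X-Container-Sync-To` pointing at the other.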
16:35 <dchalloner> Has anyone gotten the cloudfiles Java API binding library (or any Java API library) working with openstack?
16:41 <uvirtbot> New bug: #824646 in nova "API v1.1 Update server (PUT) returns "204 No Content"" [Undecided,New] https://launchpad.net/bugs/824646
17:56 <uvirtbot> New bug: #824690 in glance "Glance incorrectly checking whether image-cache is enabled" [High,In progress] https://launchpad.net/bugs/824690
18:21 <uvirtbot> New bug: #824706 in glance "gettext interfering with nosetest runs" [Undecided,New] https://launchpad.net/bugs/824706
uvirtbot  New bug: #824740 in glance "Public / Private Image Tests are lacking" [Undecided,New] https://launchpad.net/bugs/824740  19:21
*** Ephur has quit IRC19:21
*** stewart has quit IRC19:25
uvirtbot  New bug: #824745 in nova "OSAPI flavorRef causes 500" [Undecided,In progress] https://launchpad.net/bugs/824745  19:26
*** shaon_ has joined #openstack19:29
*** shaon has quit IRC19:30
*** mszilagyi has joined #openstack19:30
*** stewart has joined #openstack19:33
*** lvaughn has joined #openstack19:36
*** Ryan_Lane has joined #openstack19:39
*** jkoelker has quit IRC19:42
*** mattray has joined #openstack19:42
*** jkoelker has joined #openstack19:43
*** stewart has quit IRC19:45
*** aliguori has quit IRC19:46
*** lvaughn has joined #openstack19:46
*** mfer has quit IRC19:49
*** Ephur has joined #openstack19:51
*** dprince has quit IRC19:54
*** rfz_ has quit IRC19:54
*** mfer has joined #openstack19:56
*** katkee has quit IRC20:05
*** jfluhmann has quit IRC20:07
*** jfluhmann has joined #openstack20:07
*** mattray has quit IRC20:09
*** mattray has joined #openstack20:10
*** aliguori has joined #openstack20:12
*** Ryan_Lane has quit IRC20:13
*** aliguori has quit IRC20:15
*** aliguori has joined #openstack20:16
*** msinhore has quit IRC20:17
*** msinhore has joined #openstack20:17
uvirtbot  New bug: #824780 in nova "Typo in update_service_capabilities: UnboundLocalError: local variable 'capability' referenced before assignment" [Undecided,New] https://launchpad.net/bugs/824780  20:21
*** npmapn has quit IRC20:21
*** manish has left #openstack20:22
*** lvaughn has quit IRC20:25
*** dolphm has quit IRC20:27
*** lts has joined #openstack20:29
*** jfluhmann has quit IRC20:36
*** rlucio has quit IRC20:37
*** nerdstein has quit IRC20:38
*** Ephur has quit IRC20:39
*** lorin1 has left #openstack20:40
uvirtbot  New bug: #824794 in glance "Tables are generated outside of migration process" [Undecided,New] https://launchpad.net/bugs/824794  20:46
*** msinhore has quit IRC20:47
*** mrjazzcat is now known as mrjazzcat-afk20:56
*** msivanes has quit IRC20:59
*** dolphm has joined #openstack20:59
*** stewart has joined #openstack20:59
*** corrigac has quit IRC21:04
*** ccorrigan has quit IRC21:04
*** rnorwood has quit IRC21:06
*** lpatil has quit IRC21:06
shaon_  getting this error: CRITICAL nova [-] failed to create /usr/lib/pymodules/python2.7/server1.MainThread-23038  21:10
*** dendrobates is now known as dendro-afk21:11
*** ChanServ sets mode: +v _cerberus_21:12
*** marrusl has quit IRC21:15
*** Ephur has joined #openstack21:17
*** bsza has quit IRC21:17
*** msivanes has joined #openstack21:20
*** RobertLaptop has quit IRC21:20
*** jtanner has quit IRC21:23
*** lts has quit IRC21:24
*** martine has joined #openstack21:24
*** mfer has quit IRC21:24
*** RobertLaptop has joined #openstack21:25
*** martine has quit IRC21:26
*** lvaughn has joined #openstack21:26
*** martine has joined #openstack21:27
*** jfluhmann has joined #openstack21:33
*** devcamcar has left #openstack21:37
*** bcwaldon has quit IRC21:38
*** rfz_ has joined #openstack21:39
*** imsplitbit has quit IRC21:40
*** Eyk^off is now known as Eyk21:43
*** miclorb has joined #openstack21:47
*** Ephur has quit IRC21:49
*** AhmedSoliman has joined #openstack21:50
*** lvaughn has quit IRC21:50
*** lvaughn has joined #openstack21:51
WormMan  so, is there a good "Idiots Guide to Keystone" out there somewhere?  21:53
*** jovy has left #openstack21:54
*** vernhart has joined #openstack21:54
nilsson  WormMan, there doesn't seem to be much documentation about keystone  21:56
*** lvaughn_ has joined #openstack21:56
dolphm  WormMan: nilsson: docs are something we're about to focus on quite a bit  21:56
dolphm  WormMan: nilsson: the dev guide is definitely the best place to start, but it's not 100% up to date, unfortunately: https://github.com/openstack/keystone/blob/master/keystone/content/identitydevguide.pdf?raw=true  21:58
dolphm  WormMan: nilsson: but that will change over the next 2-3 weeks =)  21:59
WormMan  thanks, I'm aware the only up-to-date docs are often the code :)  21:59
*** ncode_ has quit IRC22:00
*** lvaughn has quit IRC22:00
dolphm  WormMan: i'm not even sure if the code is up to date ;)  22:01
*** Shentonfreude has quit IRC22:02
*** amccabe has quit IRC22:02
*** LiamMac has quit IRC22:02
*** dolphm has quit IRC22:05
*** jfluhmann has quit IRC22:06
*** mshadle has joined #openstack22:10
mshadle  is there any way to provision kvm-based VMs, and also have the VNC information included/setup?  22:10
*** martine has quit IRC22:13
*** mattray has quit IRC22:14
*** msivanes has quit IRC22:16
*** kbringard has quit IRC22:16
*** cp16net has quit IRC22:19
*** rms-ict has quit IRC22:22
*** markvoelker has quit IRC22:24
*** mdomsch has joined #openstack22:28
*** rms-ict has joined #openstack22:35
*** ldleworker has quit IRC22:35
*** rms-ict has quit IRC22:39
*** PeteDaGuru has quit IRC22:47
*** mattray has joined #openstack22:48
*** mattray has quit IRC22:48
*** alekibango has quit IRC22:48
*** alekibango has joined #openstack22:48
*** miclorb has quit IRC22:54
*** alekibango has quit IRC22:56
*** alekibango has joined #openstack22:56
*** jtanner has joined #openstack22:58
*** elasticdog has joined #openstack23:03
*** Eyk is now known as Eyk^off23:03
*** jtanner_ has joined #openstack23:03
*** jtanner has quit IRC23:03
*** elasticdog has quit IRC23:04
*** elasticdog has joined #openstack23:05
*** elasticdog has quit IRC23:05
*** elasticdog has joined #openstack23:05
uvirtbot  New bug: #824853 in nova "Add user-data support to OSAPI" [Undecided,In progress] https://launchpad.net/bugs/824853  23:06
*** AhmedSoliman has quit IRC23:07
heckj  installing trunk packages onto a Natty box - getting an error on the debian install.  23:11
heckj  I traced it back to nova running nova-manage db sync failing, with a SQLAlchemy migration error.  23:12
*** alekibango has quit IRC23:15
*** alekibango has joined #openstack23:16
*** rnirmal has quit IRC23:22
*** aliguori has quit IRC23:24
uvirtbot  New bug: #824866 in nova "nova-common (milestone3, trunk) error on installing on Ubuntu 11.04 Natty" [Undecided,New] https://launchpad.net/bugs/824866  23:26
*** devcamcar has joined #openstack23:27
maplebed  Hi - I'm planning on setting up a good-sized swift cluster that will exist in (at least) two data centers. I want reads and writes in each location to be local and fast, with eventual consistency (ideally, < 1 minute delay) to the other locations.  23:27
maplebed  is there a standard architecture for that style of setup?  23:27
maplebed  is that the sort of setup where http://swift.openstack.org/overview_container_sync.html can come in handy?  23:28
notmyname  maplebed: best bet with the current feature set in swift is two autonomous clusters with container sync  23:28
notmyname  yes  23:28
maplebed  you say 'current feature set'...  is that sort of thing on the roadmap for a more built-in approach?  23:29
*** mdomsch has quit IRC23:31
maplebed  do you know what happens if simultaneous writes go to each cluster with the same key and different content?  23:31
notmyname  I'd like to see some better multi-DC support for one logical swift cluster. we've got some stuff planned along those lines  23:31
notmyname  maplebed: last write wins, even on container sync  23:32
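The last-write-wins behavior notmyname describes can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not Swift's actual code; the function name and tuple shape are hypothetical.

```python
# Illustrative sketch (not Swift's actual implementation) of last-write-wins
# conflict resolution between two container-synced clusters: whichever copy
# of an object carries the later timestamp supersedes the other.

def resolve_conflict(local, remote):
    """local/remote are (timestamp, body) tuples; the later write wins."""
    return local if local[0] >= remote[0] else remote

dc1 = (1313105796.1, b"written in DC 1")   # earlier write, data center 1
dc2 = (1313105796.9, b"written in DC 2")   # later write, data center 2

# Both clusters converge on the same winner regardless of which side
# runs the comparison, which is what makes the scheme eventually consistent.
assert resolve_conflict(dc1, dc2) == resolve_conflict(dc2, dc1) == dc2
```

The key property is that the comparison is symmetric, so each cluster can apply it independently and still converge.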
*** stewart has quit IRC23:32
maplebed  what's the resolution of the timestamp?  say, if both writes were during the same second, will it use microsecond timing?  23:32
maplebed  (assuming a microsecond conflict is ... unlikely.)  23:33
notmyname  ah....you're going to make me look at the code, aren't you :-)  23:33
maplebed  nah, it's cool.  23:33
*** nati_ has quit IRC23:33
notmyname  I think it's the system resolution of time (python time.time())  23:33
maplebed  I'm just writing a "this may well be a good idea for us" doc and anticipate the question.  23:33
*** ewindisch has quit IRC23:34
maplebed  >>> print time.time()  23:35
maplebed  1313105697.47  23:35
maplebed  ^^^ looks like centiseconds.  23:35
uvirtbot  maplebed: Error: "^^" is not a valid command.  23:35
maplebed  oh shush, bot.  23:35
maplebed  anyway.  that's good enough for me for now. Thanks!  23:36
notmyname  maplebed: it's the equivalent of "%016.05f" % (float(time.time()))  23:36
*** pguth66 has quit IRC23:36
notmyname  time.time() for me is 1313105796.92988  23:36
*** ewindisch has joined #openstack23:37
*** nerdstein has joined #openstack23:37
maplebed  hey, me too.  23:37
maplebed  sweet.  23:37
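The format string notmyname quotes can be tried directly. A minimal sketch of the normalization it implies (the helper name is hypothetical; the actual resolution is whatever time.time() provides on the host, which is finer than the two decimals Python's repr happened to display above):

```python
import time

# Minimal sketch of the timestamp rendering discussed above, using the
# "%016.05f" format string notmyname quoted: a zero-padded, 16-character
# string with five decimal places. The helper name is hypothetical.

def normalize_timestamp(ts=None):
    """Render a POSIX timestamp as a fixed-width, 5-decimal string."""
    if ts is None:
        ts = time.time()
    return "%016.05f" % float(ts)

print(normalize_timestamp(1313105697.47))   # -> 1313105697.47000
```

Note that fixed-width strings like this also compare correctly as plain strings, which is convenient when timestamps are stored as object metadata.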
*** dragondm has quit IRC23:37
*** ewindisch has quit IRC23:37
*** worstadmin has quit IRC23:38
*** ton_katsu has joined #openstack23:39
*** jeffjapan has joined #openstack23:40
*** ton_katsu has quit IRC23:42
*** ewindisch has joined #openstack23:47
*** miclorb has joined #openstack23:49
*** huslage has joined #openstack23:55

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!