Tuesday, 2011-07-26

*** FallenPegasus has quit IRC00:04
*** mfischer has quit IRC00:05
*** hingo has quit IRC00:09
*** huslage has joined #openstack00:11
*** stewart has quit IRC00:17
*** jeffjapan has joined #openstack00:21
*** ujjain has joined #openstack00:26
*** mfischer has joined #openstack00:28
*** ldlework has joined #openstack00:37
*** msinhore has quit IRC00:40
*** mnour has joined #openstack00:41
*** FallenPegasus has joined #openstack00:44
*** ldlework has quit IRC00:45
*** ksteward2 has joined #openstack00:46
<kpepple> is anyone using open-source Xen with nova? having some issues with libvirt templates ...  00:48
*** countspongebob has quit IRC00:48
*** freeflying has quit IRC00:51
*** freeflying has joined #openstack00:52
*** freeflying has quit IRC00:54
*** leted has quit IRC00:54
*** marrusl has quit IRC00:56
*** mnour has quit IRC00:57
*** freeflying has joined #openstack00:57
*** ccc11 has joined #openstack00:59
*** rjimenez has quit IRC01:01
*** jdurgin has quit IRC01:01
*** vladimir3p has quit IRC01:03
*** obino has quit IRC01:04
*** miclorb_ has quit IRC01:07
*** jmckenty has quit IRC01:09
*** alice has joined #openstack01:13
*** goodpie has quit IRC01:13
*** deepest has joined #openstack01:13
<deepest> Hi everyone  01:14
<deepest> I need some help from you guys  01:14
<deepest> I installed OpenStack successfully  01:14
<deepest> and now I want to configure VNC on the Dashboard following Hugo Kuo's tutorial: http://hugokuo-hugo.blogspot.com/2011/04/implement-instance-vnc-console-on.html#more  01:15
*** dragondm has quit IRC01:15
<deepest> I failed at this step  01:15
*** dantoni has quit IRC01:16
<deepest> can anyone help me solve this problem?  01:16
<deepest> I cannot find the nova-direct-api to run  01:16
*** maplebed has quit IRC01:17
<deepest> whoever knows about it, please tell me  01:17
<deepest> thank you in advance  01:17
*** ccc11 has quit IRC01:18
*** mfischer has quit IRC01:20
*** ccc11 has joined #openstack01:21
*** ccc111 has joined #openstack01:22
*** ccc11 has quit IRC01:22
*** t9md has joined #openstack01:23
*** tryggvil_ has joined #openstack01:23
<t9md> hi, I uploaded a template OS disk image via nova-api in qcow2 format.  01:24
*** ike has joined #openstack01:24
<t9md> a qcow2 image is a thin-provisioned, small disk image, but after the upload nova saves the template disk image in raw format, i.e. at its full actual size.  01:25
<t9md> this causes nova-api to hang, and copying to the compute node is time-consuming.  01:26
<uvirtbot`> New bug: #816194 in nova "Nova code coverage percentage unclear" [Undecided,New] https://launchpad.net/bugs/816194  01:26
*** ton_katsu has joined #openstack01:26
<t9md> is there any way to have nova-api save the qcow2 as-is and copy the thin-provisioned qcow2 to _base on the compute node?  01:28
*** deepest has quit IRC01:28
*** ccc111 has quit IRC01:30
*** huslage has quit IRC01:30
<ike> anyone using an H3C switch for openstack nova VLAN mode?  01:32
*** nmistry has joined #openstack01:33
*** deshantm_away is now known as deshantm01:39
*** FallenPegasus has quit IRC01:40
<HugoKuo__> deepest, how about the files in the tarball?  01:41
*** clauden has quit IRC01:45
*** jason has joined #openstack01:46
*** hingo has joined #openstack01:46
*** jason is now known as Guest5590701:46
*** Guest55907 is now known as jasona01:47
<jasona> so, anyone awake enough to want to discuss openstack storage?  01:47
*** leted has joined #openstack01:47
<kpepple> jasona: what kind of openstack storage? block (nova-volume) or object (swift)?  01:49
<HugoKuo__> I'm wondering about the I/O performance of a diskless compute node XD  01:50
<jasona> kpepp: either, both. i'm still reading and trying to get a feel for the project, what it is really trying to do and/or what people are using it for  01:51
<jasona> have a research-focused archive storage project which is starting in the 0.2-1 PB area but eventually wants to scale into ~100-200 PB.  01:51
<jasona> i'm trying to get a feel for how openstack hooks into the underlying physical storage - and it seemed easiest to talk to someone who has actually put it into use.  01:52
*** ccc11 has joined #openstack01:53
<kpepple> jasona: so we have Swift (the object storage) going into production for our public cloud later this week  01:54
*** nmistry has quit IRC01:54
*** nmistry_ has joined #openstack01:54
<jasona> kpepp: and swift is probably what i am more interested in. how are you implementing it, if you can talk about details?  01:55
<kpepple> jasona: we are based on the Cactus codebase which came out in April. what kind of details do you want?  01:55
*** hadrian has quit IRC01:55
<jasona> kpepp: whatever you're willing to talk about :) what hardware did you run on, what was your learning process with this, etc.  01:57
<jasona> i assume Cactus because Diablo isn't actually 'out' yet?  01:58
<kpepple> jasona: correct  01:59
<HugoKuo__> kpepple, are you familiar with the new Dashboard? is a keystone instance required?  02:00
<jasona> kpepp: so, is openstack in your env being run with the other openstack components? or can you mix and match openstack storage with other technologies?  02:00
<kpepple> HugoKuo__: haven't installed dashboard since they started "overhauling" it for the OS API instead of the EC2 API  02:01
<kpepple> jasona: yes, we have a compute (nova) service also  02:01
<HugoKuo__> kpepple, ok .... I'll have a try with keystone first  02:02
<kpepple> jasona: having said that, swift is an object store, not a file server or volume service. it operates conceptually like ftp or webdav, not CIFS or NFS.  02:02
*** clauden has joined #openstack02:03
*** alice has quit IRC02:04
<jasona> kpepp: so how do you envision integrating with people who want cifs/nfs etc? run that as a layer on top?  02:05
<jasona> i'm trying to figure out whether it's more like samfs as an architecture, rather than a filesystem.  02:05
<jasona> er, samqfs i should say  02:06
<jasona> trying to keep thinking in cloud terms instead of local environment terms is hard. sigh  02:06
<kpepple> jasona: it's closer to samfs than nfs, but not really samfs either. basically, you can only put objects, get objects and delete objects. you can group objects into "containers" which emulate folders (but can't be nested). you can't edit objects/files in place -- you have to delete the old file and upload an entire new one.  02:07
*** ccc11 has quit IRC02:07
<kpepple> jasona: if you want to use it as a filesystem, you'll need something like FUSE ...  02:08
<jasona> so how do you make a container available to..  02:08
<jasona> i can conceptually deal with a container management system, just figuring out the interfaces into it  02:08
*** DigitalFlux has joined #openstack02:09
*** DigitalFlux has joined #openstack02:09
<jasona> and reading the swift manual is not as enlightening as i'd have hoped :)  02:09
*** adjohn has quit IRC02:09
<kpepple> jasona: it assumes a certain level of intimacy with the codebase ...  02:09
<notmyname> jasona: what parts of swift are you struggling with? (that may not even be the right question, help me if it's not)  02:10
<kpepple> notmyname: you are like a genie who appears when swift is mentioned three times :)  02:11
<notmyname> actually I was watering the yard and talking to neighbors and just got back to see "swift" highlighted in IRC :-)  02:12
*** ccc11 has joined #openstack02:13
<notmyname> kpepple: re: "assumes a certain level of intimacy with the codebase" <-- in the highest tradition of man pages ;-)  02:13
*** medberry is now known as med_out02:13
*** msinhore has joined #openstack02:18
<kpepple> notmyname: it was a compliment  02:20
*** miclorb_ has joined #openstack02:21
<notmyname> thanks. but I know there are improvements that can be made :-)  02:22
*** RickB17 has quit IRC02:28
*** nmistry_ has quit IRC02:29
<notmyname> jasona: please don't go quiet now. I'm working on a swift presentation right now and I'm sure these same questions will come up if I don't sufficiently cover them in my talk. I need your help too :-)  02:30
*** osier has joined #openstack02:37
<dweimer> notmyname: Is your swift presentation targeted at users, developers, or admins? For users, I get a lot of questions about the use cases for swift since it isn't a traditional filesystem. Most of our users still see object storage as an archive replacement for our tape silos and aren't really sure what else it can be used for.  02:45
*** MetaMucil has joined #openstack02:46
<notmyname> dweimer: honestly, I'm not sure who will be attending, so I've included a little of it all. I do have some info on use cases  02:47
*** hggdh has quit IRC02:47
*** hggdh has joined #openstack02:50
*** marrusl has joined #openstack02:50
<HugoKuo__> are security groups only controllable through the EC2 API?  02:53
<HugoKuo__> I did not see any command for security groups in novaclient or nova-manage  02:54
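At the time HugoKuo__ is asking, security groups were indeed reachable only through the EC2 API (via tools like euca-add-group and euca-authorize), not through novaclient or nova-manage. For reference, the classic EC2 call is a flat query; a sketch of the parameters it carries (the helper name is invented, and request signing is omitted):

```python
def authorize_params(group, protocol, from_port, to_port, cidr):
    """Build the flat query parameters for an EC2-style
    AuthorizeSecurityGroupIngress call -- roughly what euca-authorize
    sends under the hood (minus SignatureVersion 2 signing)."""
    return {
        "Action": "AuthorizeSecurityGroupIngress",
        "GroupName": group,
        "IpProtocol": protocol,
        "FromPort": str(from_port),
        "ToPort": str(to_port),
        "CidrIp": cidr,
    }
```

For example, opening SSH on the default group would correspond to `authorize_params("default", "tcp", 22, 22, "0.0.0.0/0")`.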
*** ksteward2 has quit IRC02:54
*** andy-hk has joined #openstack02:56
*** FallenPegasus has joined #openstack02:58
*** cbeck has quit IRC02:59
*** cbeck has joined #openstack03:00
*** mattrobinson has quit IRC03:00
*** msinhore has quit IRC03:01
*** mattrobinson has joined #openstack03:02
*** marrusl has quit IRC03:05
*** marrusl has joined #openstack03:06
*** FallenPegasus has quit IRC03:08
*** GeoDud has quit IRC03:10
<jasona> sorry notmyname, was just on the phone to an amazon consultant who was giving me his take on openstack :)  03:11
<notmyname> I'd love to know what he said ;-)  03:11
*** ejat has quit IRC03:11
<jasona> nothing very controversial - he's not that familiar with openstack. we were just working out what the points of similarity were based on his work with s3/ec2 and so on  03:12
<jasona> but yes, use cases are of most interest  03:12
<jasona> it's hard working out _why_ the people i'm talking to want openstack at this point  03:13
<jasona> and they haven't been able to explain it to me other than waving hands and pointing at NASA  03:13
<jasona> i can wave my hands and point at the shuttle as well, but i'm not sure either of us gets anywhere in that process so..  03:13
<notmyname> swift is good for any use case that involves static (and unstructured) data that can potentially grow very large  03:14
<jasona> how?  03:14
<jasona> (how is it good in that instance)  03:14
<notmyname> backups, web content, document management, medical imaging, scientific data, disaster recovery, caching, storage appliances, ...  03:14
<notmyname> it is good for these because it is designed to be very scalable across many connections (rather than optimizing one connection) and is designed to work around failures (and therefore can be used with very cheap storage)  03:15
*** llang629 has joined #openstack03:16
<notmyname> so for example, when selecting hardware, the biggest concern becomes $/GB (more than CPU, RAM, IOPS)  03:16
<notmyname> (well, assuming a general use case like Rackspace Cloud Files. a more specific use case may have more specific concerns)  03:17
*** ejat has joined #openstack03:17
<jasona> hmm. you've sort of said 'why' but not 'how' to me.  03:17
<jasona> i'm trying to match up in my head the end use case at one end, openstack in the middle and the physical hardware at the other end  03:18
<notmyname> heh. the "how" is much more complex, but it comes down to "it's designed that way"  03:18
<notmyname> one "how" is that there is no central authority and no single point of failure  03:18
<jasona> no no. i appreciate how is complex, but if you ask me to explain say 'dedupe in a filesystem' i can do that at a basic level without necessarily having to get into algorithms and code.  03:18
*** hingo has quit IRC03:19
<notmyname> another "how" is that the system constantly checks the integrity of objects and repairs as needed  03:19
<jasona> here i'm trying to work out what openstack is giving you over a standard enterprise architecture model, e.g. physical boxes, operating system, filesystem, filesystem presentation (e.g. cifs/nfs/ftp)  03:19
<jasona> sorry if that seems pretty basic but i want to ensure i'm not missing anything obvious in how one thinks about this.  03:20
<notmyname> no. like I said before, you're helping me too :-)  03:21
*** llang629 has quit IRC03:21
<notmyname> first, swift is not a filesystem. it's not RAID, it's not SAN/DAS/NAS  03:21
<notmyname> swift offers an http rest-ful interface to blobs of data (ie files or objects)  03:22
<notmyname> swift uses commodity hardware and normal filesystems to store the data (we recommend XFS)  03:23
*** neogenix has joined #openstack03:25
<dweimer> Small aside on the commodity hardware. Do you know of swift clusters that have tried using desktop-class disks rather than enterprise SATA?  03:26
<notmyname> dweimer: yes  03:27
<notmyname> (most of them, I think)  03:27
*** adjohn has joined #openstack03:27
<notmyname> jasona: another "how": updates can be done to a live, running system with no downtime and relatively little impact to the rest of the cluster  03:27
<dweimer> That's very good to know. I've been in a bit of a battle on that front.  03:29
<notmyname> the how is that swift has a map (called the "ring") that is used to determine where in the cluster an object lives (or should live). the ring can be updated with low impact to the system  03:29
<notmyname> dweimer: we upgrade the code and add and replace servers all the time with cloud files. 0 downtime  03:29
<notmyname> jasona: the ring has similar properties to a consistent hashing ring (where it gets its name), but that concept has been tweaked for our needs. we added the concept of zones within a ring, so we can store stuff in distinct availability zones  03:33
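A toy sketch of the ring idea notmyname describes: hash the object's path to pick a partition, and assign each partition replica devices in distinct zones. This is illustrative only; swift's real ring builder balances by device weight and stores the mapping far more compactly:

```python
import hashlib
from collections import defaultdict

class ToyRing:
    """Toy version of swift's ring: 2**part_power partitions; each
    partition is assigned `replicas` devices, no two in the same zone."""

    def __init__(self, devices, part_power=4, replicas=3):
        self.part_count = 2 ** part_power
        by_zone = defaultdict(list)
        for zone, name in devices:            # devices = [(zone, name), ...]
            by_zone[zone].append((zone, name))
        zones = sorted(by_zone)
        assert len(zones) >= replicas, "need at least one zone per replica"
        # round-robin partitions across zones, then across a zone's devices
        self.part2devs = []
        for part in range(self.part_count):
            devs = []
            for r in range(replicas):
                zdevs = by_zone[zones[(part + r) % len(zones)]]
                devs.append(zdevs[part % len(zdevs)])
            self.part2devs.append(devs)

    def get_nodes(self, account, container, obj):
        """Map an object path to its partition and replica devices."""
        key = ("/%s/%s/%s" % (account, container, obj)).encode()
        part = int(hashlib.md5(key).hexdigest(), 16) % self.part_count
        return part, self.part2devs[part]
```

The payoff is the property the conversation is circling: the name-to-location mapping is a small, replaceable data structure, so devices can be added or drained by rebuilding the map rather than touching every object.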
<jasona> so for me, swift is basically another abstraction layer.  03:35
<jasona> which leads to: does swift sit on top of a fs or under it? (i have been assuming under it?)  03:36
<notmyname> on top of it  03:36
<jasona> huh. interesting  03:36
<jasona> so you recommend xfs mainly for the scalability of the fs then?  03:36
<jasona> (e.g. into petabytes)  03:36
<notmyname> 1) buy cheap hardware 2) install linux and XFS 3) install swift and point it at the XFS drives 4) store stuff and profit  03:37
<jasona> where can i fit a cifs/nfs/ftp layer in with using swift to access the data?  03:37
<notmyname> jasona: yes. well, each storage volume is only the size of the drive (so 2-3 TB). it's more the sheer number of files. billions and billions  03:37
<jasona> i'm used to thinking in terms of millions of files, but not billions so far.  03:38
*** llang629 has joined #openstack03:38
<notmyname> well, I'm not actually sure that there would be billions of inodes on a single xfs drive, but however many there are, XFS handles them better than other file systems  03:39
<jasona> so would you say that some of the things zfs does have been abstracted into swift then?  03:40
<notmyname> cifs/nfs/ftp are all block-level. swift isn't. so while you can write a translation layer, it will never be very efficient  03:40
<jasona> huh, how would you say 'ftp' is block-level? to me it's file-level..  03:41
<notmyname> perhaps so. it's been a while since I looked at the protocol :-)  03:43
<jasona> ok, so what i am getting at is..  03:43
<jasona> how do you layer the actual presentation services if swift sits on top  03:43
<jasona> does nfs/cifs etc somehow get access to the fs to access your data, or is there a translation layer (kpepple mentioned fuse) to do that  03:44
<jasona> in plain terms  03:44
<jasona> how does my researcher copy his 100T of data into swift  03:44
<jasona> and then, how do his friends get access to it  03:44
<notmyname> jasona: lots of http PUT requests. ACLs are supported in swift but depend heavily on your auth system.  03:45
*** llang629 has left #openstack03:46
<notmyname> at some level your researcher's data will be loaded into swift with PUT requests. you could write a nice client app (or use existing ones like Cyberduck) or translation layers (like cloudfuse)  03:46
*** adjohn has quit IRC03:49
<notmyname> jasona: an example might help: http://programmerthoughts.com/programming/quickly-uploading-data-to-cloud-files/ and http://programmerthoughts.com/programming/quickly-uploading-to-cloud-files-part-2/  03:51
<jasona> ok. that talks about uploading files  03:54
<jasona> the bit i'm missing is how one actually 'uses' the data once it is in there  03:55
<jasona> e.g. there's 100T of, say, gene sequencing data which someone now wants to access..  03:55
<notmyname> similarly, use standard http verbs: "GET /v1/joe_researcher/human_dna/chromasome_21.dat HTTP/1.1"  03:57
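Those verbs are essentially the whole interface. A toy in-memory model of what the URL space notmyname quotes means: paths are /v1/&lt;account&gt;/&lt;container&gt;/&lt;object&gt;, and the only operations are PUT, GET, and DELETE of whole objects (the class and its status codes sketch the semantics; this is not a client library):

```python
class ToySwift:
    """In-memory model of swift's data model: accounts hold containers,
    containers hold whole objects; you PUT, GET or DELETE an object,
    never edit it in place."""

    def __init__(self):
        self.store = {}

    @staticmethod
    def parse(path):
        # "/v1/account/container/object" -> (account, container, object);
        # maxsplit keeps slashes inside object names intact
        _, version, account, container, obj = path.split("/", 4)
        assert version == "v1"
        return account, container, obj

    def put(self, path, data):
        self.store[self.parse(path)] = bytes(data)
        return 201  # Created

    def get(self, path):
        key = self.parse(path)
        return (200, self.store[key]) if key in self.store else (404, b"")

    def delete(self, path):
        return 204 if self.store.pop(self.parse(path), None) is not None else 404
```

Updating an object is a fresh PUT of the whole thing, which is exactly the "no in-place edits" constraint kpepple described earlier.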
<jasona> ok, so the crunch point there is swift ends up being an intermediate repository  03:57
<notmyname> how do you mean?  03:58
<jasona> but you can't treat it (unless there's a translation layer) as an operational layer for data manipulation  03:58
<notmyname> like for a distributed processing job?  03:58
<jasona> i can't just point my HPC environment at it and say - there's 100T of data, you can crunch it in read-only mode and output your results into another FS.  03:58
<jasona> i have to say  03:58
<jasona> GET data from swift. do_stuff. repeat  03:59
<notmyname> in (my understanding of) normal HPC jobs, you run the processing where the data is. so if you have 100T sharded across 50 boxes, run 50 jobs and only process locally  04:00
<jasona> sorta, yeah. this was the basis for asking about data presentation layers, because one attractive thing about that is you can give people access to data sets to do whatever without necessarily needing extra capacity anywhere.  04:00
<jasona> well, that's not quite how i've been seeing things work in this instance  04:00
<jasona> e.g. i look at a traditional sgi hpc environment and it's a whole bunch of cores in one rack talking to a whole bunch of storage in another rack  04:01
<notmyname> I concede ignorance  04:01
<notmyname> ah ok  04:01
<notmyname> so the data is pulled over the network?  04:01
<jasona> i don't think you should concede just yet, i think i'm still picking your brains on this :)  04:01
<jasona> yep, the data can be  04:02
<jasona> here's a scenario for you which gives you what i am dealing with  04:02
<notmyname> then in that case, I think swift could work very well :-)  04:02
<jasona> there's a bunch of compute resource that is in another state (geography)  04:02
<jasona> say, 100000 cores worth of compute  04:02
<jasona> there's a research data set that is ~1PB that lives here on basically X physical spindles (iops, capacity) with Y bandwidth.  04:03
<jasona> bandwidth is defined as the lowest available amount, which in this case is 10Gb ethernet and/or 40/100Gb ethernet available between storage and compute  04:03
<jasona> does that sort of make sense to you? i.e. what i'm trying to explain  04:04
<notmyname> I think so  04:04
<jasona> so one way of doing this without abstraction layers is fairly simple  04:04
<jasona> i have, say, a storage array. it does the raid etc. i present that storage to a server, it puts a FS on top. it then runs a web server or nfs or cifs or whatever.  04:05
<jasona> and if i put the 1PB of research data onto that FS, the presentation layer makes it available to the compute by just mounting it over there.  04:05
<jasona> i'm not saying this is a good way of doing it, just.. one way  04:05
<notmyname> ok :-)  04:05
*** AimanA is now known as HouseAway04:06
<jasona> ok, so the question then becomes 'if this is one way and it's pretty clear to follow the workflow on access', i am trying to understand the workflows around using swift in addition to this  04:06
<notmyname> give me just a couple of minutes. I need to go turn off the water in the yard  04:06
<jasona> sure. and i have to go walk the dogs soon and give away some chicken eggs to neighbours :)  04:08
*** adjohn has joined #openstack04:09
<jasona> so yeah, with filesystems like xfs that scale to billions of files  04:09
<jasona> and with filesystems like zfs that do dedupe, crc checking and so on  04:09
*** adjohn has quit IRC04:09
<jasona> i'm still wrapping my head around what swift adds to this  04:09
<dweimer> I think one advantage you would get from swift in your example is scalable aggregate bandwidth. For the NFS model you need to shard your data to hit 40/100Gb, right? Or are you using something like pNFS?  04:12
<notmyname> storing 1PB of data with a single filesystem is hard. most filesystems break down. rebuild times for RAID get really really long (zfs too? I've got a 3.5TB zfs server at home but nothing big). as dweimer points out, swift excels at aggregate throughput  04:13
<notmyname> I'm assuming you aren't thinking this is one file (object in swift) that is 1PB, right? in aggregate the data is 1PB, but it is actually many many smaller files?  04:15
*** obino has joined #openstack04:15
<notmyname> swift would be much better with the many many smaller files (in fact, 1PB of data can be represented in swift as one object, but it must be stored in smaller chunks)  04:16
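The chunking notmyname mentions is swift's large-object support: upload the pieces as ordinary objects under a shared name prefix, then create a small manifest object whose X-Object-Manifest header names that prefix, so a GET on the manifest streams the segments back in order. A rough client-side sketch (the container and object names here are invented):

```python
def segment(data, segment_size):
    """Split a blob into numbered segment objects plus the manifest
    header a swift client would set: segments go in as ordinary
    objects, and the manifest just records their shared name prefix."""
    segments = {}
    for i in range(0, len(data), segment_size):
        name = "big.dat/%08d" % (i // segment_size)   # zero-padded so names sort
        segments[name] = data[i:i + segment_size]
    manifest_headers = {"X-Object-Manifest": "segments_container/big.dat/"}
    return segments, manifest_headers

def reassemble(segments):
    """What a GET on the manifest effectively does: concatenate the
    segments in name order."""
    return b"".join(segments[name] for name in sorted(segments))
```

Zero-padding the segment numbers matters: the server concatenates segments in lexical name order, so "10" must not sort before "2".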
*** worstadmin has quit IRC04:17
*** dendro-afk is now known as dendrobates04:17
<notmyname> jasona: this may be a horrible analogy, but swift is similar to zfs in that it exposes a storage pool made up of many discrete storage volumes. however, swift is not block-level and it stores distinct replicas of the data (rather than parity bits and/or striping)  04:20
<jasona> well i wasn't really thinking about HOW swift did it but yes i see what you're saying  04:23
<notmyname> swift can be seen as a key/value store (but isn't everything a key/value store at some level?). keys are the object name (URL) and values are the object data  04:23
*** martine has quit IRC04:23
<jasona> i'm still thinking about WHY you use swift and HOW you do the things you are doing now, and figuring out where swift fits into it.  04:23
<jasona> this is why i keep saying 'abstraction layer', because it's fairly easy to think about this in layer format  04:24
<notmyname> how the code works, or how one uses swift to get your example workflow done?  04:24
<jasona> the latter  04:24
<jasona> i don't care that much about how the code works because i have an engineering background  04:24
<jasona> if it works, great, and if it's broken, patch it ;)  04:24
<jasona> (apologies to engineers who take offence)  04:25
<notmyname> heh. patches _always_ welcome ;-)  04:25
<jasona> so yeah, i'm working out swift in the workflow because i'm trying to see how that reflects on the layers underneath as well as above  04:25
<notmyname> so let me "think out loud" about how I would do your example workflow  04:26
<jasona> i'm dealing with storage vendors that don't seem to understand that you don't necessarily need 'features' in their storage engines (controllers/software) anymore  04:26
<jasona> sure, thanks.  04:27
<notmyname> first, start with a large, empty swift cluster. many servers, many drives, well-connected network (do you need more detail here?)  04:27
*** worstadmin has joined #openstack04:29
<notmyname> then load the dataset into the cluster. shard the data across objects (perhaps 1GB in size each). if needed, create a manifest file that allows you to access the complete dataset as one logical object  04:29
<jasona> nope. except, do you use drives in servers?  04:29
<jasona> i.e. is there any reason not to use storage arrays if they are cheap?  04:29
<jasona> (i.e. equivalent cost)  04:30
*** chomping has joined #openstack04:30
*** kashyap has joined #openstack04:30
<jasona> e.g. think backblaze pods as one concept (even though it's a server really..)  04:30
<notmyname> servers could be head units with JBODs filled with consumer drives (whatever is cheapest)  04:30
<jasona> ok. go on  04:30
<jasona> let's assume many servers is ~100, many drives is approx 1200 and well-connected network is 10Gb interfaces.  04:31
<notmyname> backblaze is interesting... but that's another discussion  04:31
<jasona> (if that's ok?)  04:31
<notmyname> sure. sounds great  04:31
<jasona> which is good because that's approx 2 racks. which is a nice unit of measurement :)  04:31
<jasona> er, 5 racks i mean  04:31
<jasona> so go on..  04:32
<notmyname> next, write the worker that will fetch a chunk and process it. fancy things like queueing or map/reduce etc. could be added here, but most simply the worker gets an object name to load, loads it, and processes it. it can even store the result in a separate account/container in the swift cluster  04:33
<notmyname> fire up your fancy blaster, and send the workers to your 100K cores  04:34
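The worker loop notmyname describes is tiny; everything interesting lives behind the fetch/store calls. A sketch with the swift client calls stubbed out as plain callables (all names here are hypothetical stand-ins, not a real swift client API):

```python
def run_worker(object_names, fetch, process, store):
    """Minimal worker loop for the scheme described above: `fetch`
    GETs a chunk by object name, `process` crunches it, and `store`
    PUTs the result under a separate results prefix. In a real job,
    fetch/store would wrap swift GET/PUT requests."""
    for name in object_names:
        chunk = fetch(name)
        result = process(chunk)
        store("results/" + name, result)
```

Because each worker only touches the object names it is handed, spreading the job across 100K cores is just a matter of partitioning the name list; the cluster's aggregate bandwidth does the rest.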
*** openpercept_ has joined #openstack04:37
<jasona> well i appreciate that, but that's the ... bit before 'profit', isn't it :)  04:40
<jasona> i.e. someone has already written cifs or nfs.. but this means someone has to create that worker interface before you can use the data in question?  04:40
<uvirtbot`> New bug: #816236 in nova "Initial 'nova db sync' migration failure on mysql due to foreign key reference" [Undecided,New] https://launchpad.net/bugs/816236  04:42
<notmyname> I don't think that's the same  04:42
<notmyname> "someone has already written cifs" is the same as "someone has already written swift"  04:42
<notmyname> your worker shouldn't know or care about the underlying fs in the swift cluster  04:42
<notmyname> so the worker talks "swift" rather than "cifs"  04:43
<notmyname> for your layers: hardware -> os -> fs -> swift -> user  04:43
<jasona> well you have given me stuff to think about, so thanks for the time. i haven't yet resolved in my head exactly how to proceed but i may just need to do a lot more reading.  04:44
<jasona> yeah. it's the swift -> user bit i'm thinking about  04:44
<notmyname> jasona: if you by any chance are at OSCON, I'll be giving a talk on swift on wednesday  04:44
<jasona> especially since with the layers i may need to partition it up so that a chunk of it is swift, a chunk of it is file and a chunk is traditional block (except i'd be looking at ip-based block of course)  04:45
<jasona> uhm, not at OSCON sorry (not in the right country :-) is anyone broadcasting your talk on the net?  04:45
<notmyname> no idea. I didn't know I was going until last Thursday :-)  04:45
<notmyname> I'll post my slides up somewhere, I know. I wouldn't be surprised if it will be filmed  04:46
<jasona> if it's filmed and streamed, that'd be great. but getting your slides would be cool as well  04:47
<jasona> are you working on the commercial side or the uni/research side of infrastructure?  04:47
<notmyname> commercial. I work for Rackspace on the Cloud Files product  04:47
<jasona> one of my previous employers used rackspace. in a very very trivial way. had a single rackspace server instance used for monitoring :)  04:48
<jasona> so you do the whole fanatical thing then?  04:48
<jasona> and in terms of cloud files, is that intended to scale up to (past?) aws/s3/ec2?  04:49
<notmyname> cloud files is a direct competitor to s3  04:50
<adam_g> notmyname: where's the swift talk?  04:50
<jasona> yeah i figured, was just curious where you guys saw yourselves relative to s3.  04:50
<jasona> <+notmyname> jasona: if you by any chance are at oscon, I'll be giving a talk on swift on wednesday  04:50
<jasona> i think that's in oregon?  04:50
<adam_g> i meant, where at OSCON? i'm attending but don't remember seeing that listed anywhere on the schedule  04:51
<jasona> oh sorry :)  04:51
<notmyname> adam_g: E141 at 4:10 it looks like  04:51
*** rchavik has joined #openstack04:51
*** worstadmin_ has joined #openstack04:52
*** rchavik has quit IRC04:52
<notmyname> I didn't write the summary, so it will be (very slightly) different  04:52
*** rchavik has joined #openstack04:52
<adam_g> oh, i think i passed over that as being an introductory session  04:53
<notmyname> adam_g: np. like I said earlier, I'm not sure who the audience will be. I'm sure I'll have to adjust  04:54
<notmyname> maybe an opening line of "This talk assumes a basic familiarity with the CAP theorem and the Dynamo paper. You'll remember that in section 5 of the paper that...."  04:55
*** worstadmin has quit IRC04:55
<jasona> now, was that combat air patrol theory or columbia appletalk protocol?  04:56
<notmyname> yes :-)  04:57
<dweimer> CAP is a good point actually. Some of our users are thrown off by eventual consistency. Being used to filesystems, they don't expect it.  04:57
*** worstadmin_ has quit IRC04:58
<jasona> dweimer: can you elaborate please?  04:59
*** obino has quit IRC05:01
<notmyname> I've got a plane to catch tomorrow and I still need to pack. gotta run  05:01
*** ejat has quit IRC05:01
<notmyname> good talk though :-)  05:01
<jasona> cyas NMN. good luck with the talk  05:01
<jasona> sorry i will miss it in person but.. can't see making the US in the next 24h :)  05:02
<jasona> and yay, now following on g+ :)  05:03
<dweimer> jasona: Swift has processes on the backend to synchronize the various replicas. This is useful for data reliability. If you have a node go offline or become unresponsive, it will be brought back up to date when it is restored. The drawback is that some changes aren't immediately reflected on all nodes.  05:04
<jasona> ah. hmm  05:04
*** countspongebob has joined #openstack05:04
<dweimer> A recent example was a web interface we are trying to develop. You can delete a container and the bytes used for your account don't immediately decrease.  05:05
<jasona> well i certainly understand the bit about being used to a 'filesystem' and updates seeming to go through 'immediately' in that instance  05:05
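dweimer's point in miniature: writes land on whichever replicas are up, a read hits one replica, and a background pass reconciles the rest with newest-write-wins, so a read can briefly return stale data. A toy simulation of that behaviour (not swift's actual replicator):

```python
import random

class EventuallyConsistent:
    """Three replicas of one value. A write lands only on the replicas
    that are 'up'; a read hits one replica at random, so it can be
    stale until replicate() reconciles everyone (highest timestamp
    wins, mimicking newest-write-wins replication)."""

    def __init__(self):
        self.replicas = [(0, None)] * 3   # (timestamp, value) per replica

    def write(self, ts, value, up=(0, 1, 2)):
        for i in up:
            self.replicas[i] = (ts, value)

    def read(self):
        return random.choice(self.replicas)[1]

    def replicate(self):
        newest = max(self.replicas)       # compares timestamps first
        self.replicas = [newest] * 3
```

This is also why dweimer's container deletion takes a while to show up in the account byte count: the usage numbers are reconciled by the same kind of asynchronous pass.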
*** f4m8_ is now known as f4m805:06
*** HowardRoark has quit IRC05:08
<dweimer> If you do decide to test with swift, I would be interested in your experiences. Our clusters aren't that large, but it sounds like our use cases are similar.  05:10
<dweimer> We're also looking at using swift for archiving data after compute. Using lustre during the compute job and then migrating it to swift for archive and external access.  05:10
*** andy-hk has quit IRC05:14
*** andy-hk has joined #openstack05:14
*** snowboarder04 has joined #openstack05:27
*** snowboarder04 has joined #openstack05:27
*** reed has quit IRC05:30
*** ddutta has quit IRC05:34
*** dirakx has quit IRC05:41
<jasona> are you using swift at SDSC, dweimer?  05:49
<jasona> and i don't consider 100x servers or 1-2PB large anymore, not with cheap storage :)  05:50
<jasona> and not when backblaze is pushing a model of ~200T < $10k nowadays! (different paradigms i know but still)  05:50
<jasona> that said, it takes me a while to wrap my head around 1000x servers or 100PB.  05:51
*** chetan has joined #openstack05:54
*** obino has joined #openstack05:54
*** wariola has joined #openstack05:55
*** dirakx has joined #openstack05:55
*** mandela has joined #openstack06:04
*** rupakg has quit IRC06:21
*** cbeck has quit IRC06:28
*** cbeck has joined #openstack06:29
*** guigui1 has joined #openstack06:29
*** Eyk^off is now known as Eyk06:30
*** worstadmin has joined #openstack06:30
*** mgoldmann has joined #openstack06:37
*** Aaron-huang has joined #openstack06:45
*** pothos has quit IRC06:45
*** pothos_ has joined #openstack06:46
*** pothos_ is now known as pothos06:46
*** Aaron_huang has quit IRC06:47
*** reidrac has joined #openstack06:58
*** Ephur has joined #openstack06:59
*** countspongebob has quit IRC07:04
*** Eyk is now known as Eyk^off07:06
<mandela> hi, i added a nova-compute node, but can not find it in the database  07:08
<mandela> i wonder if the compute node will connect to the controller  07:09
<mandela> how can i find where the problem is  07:09
<mandela> i have configured the node with the ec2_host  07:10
*** koolhead17 has quit IRC07:10
*** AhmedSoliman has joined #openstack07:11
<mandela> is there anyone who can help me  07:11
*** jaypipes has quit IRC07:11
*** ynoxen has quit IRC07:16
*** jasona has quit IRC07:19
*** Tribaal has joined #openstack07:24
*** jaypipes has joined #openstack07:24
<vishy> mandela: you need to set the rabbit_host and sql_connection flags  07:26
*** dobber has joined #openstack07:28
*** Eyk^off is now known as Eyk07:29
*** chomping has quit IRC07:31
*** cbeck has quit IRC07:34
*** cbeck has joined #openstack07:35
*** katkee has joined #openstack07:37
*** koolhead11 has joined #openstack07:39
<koolhead11> hi all  07:39
*** teratorn has left #openstack07:44
<giany> when running swauth-prep -K key -A http://<AUTH_HOSTNAME>:8080/auth/  07:48
<giany> I get this message: http://paste.openstack.org/show/1944/  07:48
<giany> any idea why?  07:48
*** Funnnny has joined #openstack07:51
*** arun has quit IRC07:57
<Funnnny> Hello all  08:03
*** miclorb_ has quit IRC08:05
*** Funnnny_ has joined #openstack08:05
*** Ephur has quit IRC08:08
*** Funnnny has quit IRC08:08
*** Funnnny has joined #openstack08:09
*** Funnnny has quit IRC08:11
*** ljl1 has joined #openstack08:12
*** Fu4ny has joined #openstack08:13
<Fu4ny> hi, i'm thinking about using swift to store photos in a photo gallery  08:15
<Fu4ny> I heard that swift is a better fit for static objects (like VM images, backup content) than dynamic content like photos  08:15
<Fu4ny> will it fit my use case, and can I upgrade the system to store user content (like Dropbox)?  08:16
*** ahmed_ has joined #openstack08:18
*** AhmedSoliman has quit IRC08:20
<Fu4ny> I would be happy if anyone can answer even if I'm not here :)  08:21
<Fu4ny> going to check the eavesdrop log  08:21
vishyFu4ny: why would you say photos are dynamic?08:22
vishyFu4ny: seems like a perfectly reasonable use case for swift to me08:22
Fu4nymy application lets users resize, apply image effects...08:23
vishyah i see08:24
Fu4nywhen an object changes, I have to delete the old one and upload a completely new one, right?08:24
vishyyes you would need to store each version08:24
Fu4nyso I think my solution is somewhere else08:26
vishyseems like any storage would have the same issue08:27
vishyseems like image effect and resizing is going to change the source enough that you will have to rewrite the file anyway08:28
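The store-each-version approach vishy describes can be sketched as a tiny naming helper: each edit is PUT under a new versioned key and the previous key is deleted once the upload succeeds. The key scheme below is hypothetical, not anything Swift prescribes:

```python
# Sketch: version-suffixed object names for "mutable" photos in an
# object store that replaces objects wholesale. Names are hypothetical.

def versioned_name(base, version):
    """Build an object name like 'photos/cat.jpg.v3'."""
    return "%s.v%d" % (base, version)

def next_upload(base, current_version):
    """Return (name_to_put, name_to_delete) for an edited photo.

    The caller PUTs the new bytes to name_to_put, then DELETEs
    name_to_delete once the upload has succeeded. current_version of 0
    means no prior version exists, so there is nothing to delete.
    """
    new = versioned_name(base, current_version + 1)
    old = versioned_name(base, current_version) if current_version else None
    return new, old

if __name__ == "__main__":
    put_name, delete_name = next_upload("photos/cat.jpg", 3)
    print(put_name)     # photos/cat.jpg.v4
    print(delete_name)  # photos/cat.jpg.v3
```

Deleting only after the new PUT succeeds means a failed edit leaves the previous version intact.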
Fu4nyif I want to change the service to store anything other than images08:29
Fu4nyI'm worried about whether it would fit08:30
Fu4nysomething like Dropbox08:30
reidracFu4ny: ask yourself if you could Amazon S3 as storage service (dropbox uses S3)08:30
reidrac*could use08:31
Fu4nyokay, I'll take a look08:31
Fu4nymaybe it's a reasonable choice08:32
*** kashyap has quit IRC08:35
*** shehjart has joined #openstack08:37
*** andy-hk has quit IRC08:37
*** Capashen has joined #openstack08:38
*** mnour has joined #openstack08:40
*** andy-hk has joined #openstack08:42
*** arun has joined #openstack08:43
*** arun has joined #openstack08:43
*** mandela has quit IRC08:44
*** Eyk is now known as Eyk^off08:47
*** stack has joined #openstack08:49
*** stack has quit IRC08:50
*** rods has joined #openstack08:54
*** daysmen has joined #openstack08:56
*** mnour has quit IRC08:57
*** mnour has joined #openstack08:57
*** markg has joined #openstack08:57
*** darraghb has joined #openstack08:58
*** jeffjapan has quit IRC09:09
*** kashyap has joined #openstack09:12
*** guigui1 has quit IRC09:12
*** dirakx has quit IRC09:15
*** anhdungcha has joined #openstack09:15
anhdungchaHey guys09:15
anhdungchaDo you know about VNC on Dashboard?09:15
anhdungchaPlease help me09:15
anhdungchaI got some errors information like this http://img38.imageshack.us/img38/415/dashboard3.png09:16
anhdungchathe dashboard run ok like this http://imageshack.us/photo/my-images/220/dashboard2c.jpg/09:17
anhdungchaDo I need to do anything for this problem?09:18
anhdungchahow can I solve that error?09:18
*** mnour has quit IRC09:20
*** mnour has joined #openstack09:20
*** irahgel has joined #openstack09:28
*** jahor has joined #openstack09:29
*** ahmed_ has quit IRC09:29
BK_mananhdungcha: please check that your host is able to resolve (via hosts or DNS) your compute nodes (where nova-compute is running). that might be an issue09:29
anhdungchain the recent09:30
anhdungcha172.18.15.35 is the cloud controller09:30
anhdungchamy computer worker is
BK_mananhdungcha: I see. check your output of nova-manage service list and try to reach your compute worker from CC by name (ping HOSTNAME). Don't use an IP address09:31
anhdungchaYou want me to change to the compute worker IP, right?09:31
BK_mananhdungcha: no. Please read my advice above09:32
anhdungchajust give me a moment09:32
*** irahgel has left #openstack09:33
BK_mananhdungcha: you can find a name of your compute worker in output of "nova-manage service list" command09:34
anhdungchaI tried to ping but got "unknown host"09:36
BK_manfix that and your VNC console will be operational09:36
BK_manput "your_compute_worker_hostname" into /etc/hosts on the CC09:37
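BK_man's fix amounts to one line on the cloud controller; IP and host name below are placeholders:

```
# /etc/hosts on the cloud controller (placeholder IP and hostname;
# the hostname must match what "nova-manage service list" reports)
172.18.15.36    compute-worker-01
```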
*** dirakx has joined #openstack09:37
anhdungchathank you so much09:38
anhdungchaI solved my problem09:38
*** tryggvil_ has quit IRC09:40
ljl1hi, all. I have a problem. If one of the storage nodes fails, e.g. a network problem or power off, can swift automatically reassign partitions?09:44
*** divid_ has joined #openstack09:44
reidracI won't use it if it's offline, but in order to reassign partitions I think you need to remove the missing drives from the ring and then rebalance the cluster09:45
reidrac*it won't use it09:45
*** darraghb has quit IRC09:46
*** darraghb has joined #openstack09:47
ljl1Do I have to remove the missing drive manually?09:50
reidracyes, you do09:50
*** ccc11 has quit IRC09:53
ljl1And I found that if I manually modify the ring (add or remove a drive), I have to scp /etc/swift/*.ring.gz to all the storage nodes; otherwise the change doesn't take effect.09:53
ljl1Is that right?09:54
*** ike has quit IRC09:55
*** anhdungcha has quit IRC09:56
reidracI think so, otherwise the other storage nodes won't know they have to manage new partitions10:00
*** tryggvil has joined #openstack10:01
*** chemikadze has joined #openstack10:03
ahale-yeah you'd need to redistribute the ring to all the storage and proxy nodes10:05
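The remove/rebalance/redistribute procedure described above can be sketched as follows; device IDs and host names are placeholders, and this is run wherever the ring builder files live:

```shell
# Remove a dead drive from each ring, rebalance, and push the new
# rings to every storage and proxy node. "d42" and the host names
# are placeholders for your own device IDs and nodes.
cd /etc/swift
for ring in account container object; do
    swift-ring-builder $ring.builder remove d42
    swift-ring-builder $ring.builder rebalance
done
for host in storage1 storage2 storage3 proxy1; do
    scp *.ring.gz $host:/etc/swift/
done
```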
ljl1If that is true, then I don't think swift fits huge storage clusters, because it needs an administrator working around the clock10:06
jahorhello, has anybody used opennebula and openstack and could share some experience and notes on using them in production?10:10
*** tahoe_ has quit IRC10:10
Fu4nyopennebula has many production deployments, like CERN10:11
ljl1Another question: how does the proxy server know if a storage node fails? I can't find any information about storage nodes in /var/log/syslog on the proxy server10:11
Fu4nyopenstack does not10:11
Fu4nyjahor: i'm talking about nova vs opennebula as it's more comparable10:12
Fu4nyopennebula doesn't have a feature like swift10:12
jahorFu4ny: thanks. From my view it looks like opennebula is not that complicated, but it's not an AWS replica, it has (too) simplistic IP address assignment and no sophisticated storage ... and that's what interests me in openstack as the more powerful solution10:20
*** morfeas has quit IRC10:21
Fu4nyyeah, opennebula is simpler (though not that simple) than openstack if you just want to deploy VMs10:22
Fu4nyopenstack is more complicated but you can do more with it10:23
jahorFu4ny: the problem is I want it for a production environment, not only a testing environment, so some things like that will be valuable10:23
Fu4nyboth will fit production10:24
Fu4nyit's the matter of what you need :)10:24
jahorFu4ny: interesting that, as you say, openstack does not have as many production examples, because I see more hype about it10:24
*** JKERZN has quit IRC10:24
*** JKERZN has joined #openstack10:24
Fu4nyit doesn't have big production examples10:25
Fu4nyit's still new ;)10:25
jahorFu4ny: thanks for sharing. it looks like i must try both ;o)10:25
*** AhmedSoliman has joined #openstack10:27
*** miclorb_ has joined #openstack10:28
*** whitt has quit IRC10:28
*** wariola has quit IRC10:29
*** kaigoh has joined #openstack10:35
kaigohhi there!10:36
kaigohjust looking for some help if possible?10:36
kaigohI'm coming from a VMWare background...so bear with me!10:36
kaigohAt the minute, I am really confused with how networking and IP addressing works with clouds like openstack10:37
kaigohI.e. under vmware, I can assign two NICs, one with a (real) public IP and another with an internal class C IP. Can I do the same with openstack, and if so, can you point me in the right direction for some more detail?10:38
Fu4nysomeone will drop-by and answer you later10:43
Fu4nyyou should mention the component you're asking10:43
Fu4nylike "nova-network"10:43
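For reference, the VMware-style split kaigoh describes roughly corresponds to nova-network's model of a private fixed range on one NIC plus public floating IPs on another; a sketch, with all values placeholders and subject to change between releases:

```
# nova.conf flags (flagfile style; all values are placeholders)
--network_manager=nova.network.manager.FlatDHCPManager
--flat_interface=eth1       # NIC carrying the private fixed range
--public_interface=eth0     # NIC that holds public/floating addresses
```

Public addresses are then attached to instances on demand via nova-manage's floating-IP commands rather than being assigned directly to the guest NIC.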
*** mnour1 has joined #openstack10:43
*** ljl1 has quit IRC10:43
*** mnour has quit IRC10:44
kaigohis there a really in depth getting started guide to openstack anywhere?10:44
reidracljl1: sorry, I'm kind of busy :( -- you should get logs when there's a problem communicating with a storage node10:44
*** kaigoh has quit IRC10:45
reidracand these log entries will tell you which storage node is causing trouble10:46
*** ronan_ has joined #openstack10:46
gianyany idea why it shows this X-Storage-Url:
gianyI need to access the storage from a different location..10:47
gianywhere can I change that param?10:47
reidracgiany: which auth system are you using?10:49
reidracthe auth middleware is the one returning the URL of the proxy after the authentication10:49
reidracI would check the auth service configuration10:49
gianyreidrac: I'm using swauth10:51
gianythis is how my proxy file looks10:52
*** Fu4ny has quit IRC10:53
*** chemikadze has left #openstack10:53
reidracgiany: I've never used swauth, sorry10:54
*** divid_ is now known as divid10:55
*** willaerk has joined #openstack11:07
*** nerens has joined #openstack11:07
*** jasona has joined #openstack11:11
*** Tribaal has quit IRC11:14
*** Tribaal has joined #openstack11:14
*** markvoelker has joined #openstack11:17
*** jasona has quit IRC11:23
*** miclorb_ has quit IRC11:26
*** guigui has joined #openstack11:28
*** lorin1 has joined #openstack11:31
*** gaitan has joined #openstack11:33
*** miclorb__ has joined #openstack11:34
*** t9md has quit IRC11:34
*** smaresca has quit IRC11:37
*** ctennis has quit IRC11:37
*** daedalusflew has quit IRC11:37
*** miclorb__ has quit IRC11:41
*** duker has joined #openstack11:49
*** mfer has joined #openstack11:49
*** daedalusflew has joined #openstack11:49
*** smaresca has joined #openstack11:50
*** ctennis has joined #openstack11:51
*** brendan__ has joined #openstack11:51
*** brendan__ has quit IRC11:53
*** brendan__ has joined #openstack11:54
*** worstadmin has quit IRC12:04
*** nid0 has quit IRC12:08
*** tahoe has joined #openstack12:11
*** nid0 has joined #openstack12:12
*** soren has joined #openstack12:16
*** ChanServ sets mode: +v soren12:16
uvirtbot`New bug: #816386 in glance "test_scrubber functional tests fail on package build" [High,Confirmed] https://launchpad.net/bugs/81638612:16
*** msinhore has joined #openstack12:17
*** Tribaal is now known as Jupiter12:25
*** Jupiter is now known as Guest5311212:26
*** Guest53112 is now known as Tribaal12:29
*** martine has joined #openstack12:29
*** nmistry has joined #openstack12:32
*** lts has joined #openstack12:32
*** ronan_ has quit IRC12:33
*** guigui has joined #openstack12:34
*** Tribaal is now known as NotTribaal12:35
*** duker has quit IRC12:35
*** rchavik has quit IRC12:38
*** ejat has joined #openstack12:39
*** ejat has joined #openstack12:39
*** matiu has quit IRC12:40
*** rchavik has joined #openstack12:40
*** msivanes has joined #openstack12:41
*** nagyz has joined #openstack12:42
nagyzhi there12:42
*** ameade has joined #openstack12:42
nagyzI'm trying to get the diablo packages working from griddynamic's yum repository on RHEL6, without any luck12:43
nagyzin the past, nova-network used to automatically create the bridge, and set it up12:43
nagyzbased on google, I've managed to figure out that I need to specify the bridge_interface to nova-manage when I add a network12:43
nagyzwhat else changed in this regard?12:43
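The bridge_interface option nagyz found corresponds to a network create along these lines; a hedged sketch, since the exact option names and defaults changed between cactus and diablo and all values here are placeholders:

```shell
# Create a nova network with an explicit bridge and bridge interface
# (diablo-era syntax sketch; ranges, label and interface are placeholders)
nova-manage network create \
    --label=private \
    --fixed_range_v4=10.0.0.0/24 \
    --num_networks=1 \
    --network_size=256 \
    --bridge=br100 \
    --bridge_interface=eth1
```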
gianyany idea if there is any documentation related to S3 compatibility?12:44
*** nmistry has quit IRC12:45
nagyzor should I just go back to the older release..12:46
*** huslage has joined #openstack12:48
*** guigui has quit IRC12:50
*** bsza has joined #openstack12:50
*** NotTribaal is now known as Tribaal12:50
*** bsza has quit IRC12:52
*** Capashen has quit IRC12:53
*** caribou has joined #openstack12:53
*** Capashen has joined #openstack12:54
*** brendan__ has quit IRC12:55
*** duker has joined #openstack12:57
*** guigui1 has joined #openstack12:58
*** hadrian has joined #openstack13:02
*** bsza has joined #openstack13:04
*** brendan__ has joined #openstack13:06
brendan__Hi, has anyone managed to get Windows 7 running on openstack?13:07
*** shentonfreude has joined #openstack13:08
*** freeflying has quit IRC13:09
*** freeflying has joined #openstack13:10
nagyzwhat do they have to do with each other?13:11
*** olafont_ is now known as olafont13:15
sandywalshttx, I don't think I'm getting enough emails from you. Could you send more please? :)13:15
* ttx considers accepting even more money to refuse working13:16
*** cruciform has joined #openstack13:17
*** brendan1495 has joined #openstack13:19
*** DuncanT has quit IRC13:19
*** dolphm has joined #openstack13:19
*** brendan__ has quit IRC13:19
*** DuncanT has joined #openstack13:19
*** brendan1495 has quit IRC13:20
*** brendan1495 has joined #openstack13:21
uvirtbot`New bug: #816406 in nova "Service stats needs to be unified across virt layer" [Undecided,New] https://launchpad.net/bugs/81640613:21
*** pimpministerp has joined #openstack13:22
*** primeministerp has quit IRC13:23
*** pimpministerp has quit IRC13:23
*** brendan1495 has quit IRC13:24
*** bcwaldon has joined #openstack13:25
*** ton_katsu has quit IRC13:27
*** stewart has joined #openstack13:28
*** rjimenez has joined #openstack13:29
*** Ephur has joined #openstack13:30
*** primeministerp has joined #openstack13:32
*** Ephur_ has joined #openstack13:34
*** Ephur has quit IRC13:36
*** Ephur_ is now known as Ephur13:36
*** ejat has quit IRC13:37
creihtgiany: you need to change the default storage url for swauth middleware13:38
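In swauth that default lives in the proxy config; a sketch of the setting (host is a placeholder), noting that it affects the URL handed back in X-Storage-Url for newly prepped accounts:

```ini
# /etc/swift/proxy-server.conf (fragment; host is a placeholder)
[filter:swauth]
# the cluster URL returned as X-Storage-Url after authentication
default_swift_cluster = local#https://public.example.com:8080/v1
```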
gianycreiht: i got it sorted13:38
gianysame for the Q i asked on pm13:38
creihtahh cool13:39
gianyi tried to use an S3 tool13:39
creihtas to the s3 compatibility, we don't have a lot in the way of documentation yet13:39
creihtIs all that is available at the moment13:40
gianyi found this Q13:40
gianyi was able to make that to work..13:40
gianythough.. it would be nice if, instead of system:<account>, I could use a KEY similar to what S3 has13:40
gianybut i guess its ok like that too13:41
creihtyeah that is more of a side effect of how our auth works13:41
gianyanyway I was able to test this.. and after a few days of playing around.. I was able to set it up and run some tests13:43
*** troytoman-away is now known as troytoman13:44
*** amccabe has joined #openstack13:45
*** Flint has joined #openstack13:45
*** kashyap has quit IRC13:46
*** parkerro has joined #openstack13:46
Flintnotmyname: Hi again, regarding our missing data in account_stats, I did a HEAD on the missing account and it came back with the expected data.  Yesterday you mentioned that (continued)13:46
Flintnotmyname: if the HEAD was accurate, but account_stats isn't, it means the "account updater" isn't running.  We don't see anything in the Swift repository called "account updater".  Is it also known by another name?13:48
*** f4m8 is now known as f4m8_13:48
creihtFlint: it is actually the swift-container-updater13:49
creihtit makes sure that each container has reported the correct information up to the account13:49
Flintnotmyname: ok, so that is something that we normally start (swift-init container-updater start) and I see it running on all 4 storage nodes.  BTW, yesterday I switched over from our SAIO instance to our multi-node instance (continued)13:53
Flintnotmyname: to see if the same problems exist there (and they do).  The account_stats is missing one or more users and container_stats is missing one or more containers for each user.13:54
*** vladimir3p has joined #openstack13:55
creihtFlint: the first thing I would do is check the logs on each storage server to make sure the updaters are not encountering any errors13:55
Flintcreiht: Thanks!  will do.13:57
*** ryker has joined #openstack13:57
creihtThe most common reason for this is usually a configuration error13:57
creihtFlint: you can also do HEAD requests to each account, just to see if those are correct13:58
creihtthat should help you narrow down if the problem you are having is in the updators, or in the stats collection13:58
Flintcreiht: yeah, the HEAD requests look good (they show all the expected values)13:59
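The per-account HEAD check creiht suggests can be done with curl; token, host and account below are placeholders:

```shell
# HEAD an account to read its stats directly from the account server
# path (token, host and account name are placeholders)
curl -I -H "X-Auth-Token: AUTH_tk0123" \
    http://proxy.example.com:8080/v1/AUTH_myaccount
# Compare X-Account-Container-Count, X-Account-Object-Count and
# X-Account-Bytes-Used in the response headers against account_stats.
```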
creihtthen I will have to defer to notmyname for the stats stuff :)13:59
*** rchavik has quit IRC13:59
*** guigui1 has quit IRC14:02
Flintcreiht: one more question please...I see we are getting some container-updater timeout errors.  they say that the operation will be retried later, but I'm unsure how to verify that.  (continued)14:02
creihtFlint: how big is your cluster?14:03
primeministerpcreiht: that sounds personal14:03
primeministerpcreiht: ;)14:03
Flintcreiht: Each storage node only has 1 partition, would it help to increase the partitions to minimize these timeouts?14:03
creiht1 drive partition, or 1 ring partition14:03
Flintcreiht: both (I believe)14:04
creihtFlint: when you initialized the ring, what partition power of 2 did you use?14:04
nagyzshouldn't nova-network (as of cactus) create br100 automatically?14:04
creihtor did you follow the default multi-node instructions?14:04
*** mattray has joined #openstack14:05
*** uksysadmin has joined #openstack14:06
Flintcreiht: the default (I think).  the numbers I see in our remakerings is 18, 3, 114:06
creihtFlint: ok, then you should be good there14:06
uksysadminhello all. quick q on messaging... are there any changes to the messaging bus used or is it all still rabbitmq?14:07
Flintcreiht: so the timeouts are a red herring to solve our problem?  so are you saying that if the HEAD responses are correct, then the problem is likely to be in the stats gathering?14:07
creihtFlint: In general the timeout occurs because the account db on that node is too busy handling other requests to handle that one in time14:07
creihtFlint: my hunch is in stats gathering14:08
creihtFlint: There are a lot of things that can cause the timeout14:09
Flintcreiht: ok, thanks.  we'll try to catch notmyname later.  Thanks again for all your help!  (and by the way, I'm not ashamed of my cluster size...grin)14:09
nagyzso, uh, anyone using cactus on rhel6?14:10
gianynagyz: i used the openstack object storage on centos 614:11
nagyzI'd like to use compute and network14:11
*** ldlework has joined #openstack14:12
creihtFlint: It may be that if you have a small number accounts, the container updater is running through the containers so fast that it is overwhelming the accounts with the updates14:14
creihtif that is the case, you can help that a bit by adjusting slowdown and account_suppression_time under [container-updater] in the container-server.conf14:14
creihtand maybe lowering the concurrency14:15
*** osier has quit IRC14:16
creihtof course once your cluster grows, you will want to adjust those back :)14:16
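The knobs creiht names live in the container server config; a sketch of the section, where the numbers are placeholder guesses to illustrate the direction of the change, not tuned recommendations:

```ini
# /etc/swift/container-server.conf (fragment; numbers are placeholder
# guesses, not recommendations -- raise slowdown / suppression and
# lower concurrency to ease pressure on a small cluster)
[container-updater]
concurrency = 1
slowdown = 0.05                   # seconds to sleep between containers
account_suppression_time = 120    # seconds to skip accounts that error out
```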
*** jkoelker has joined #openstack14:19
nagyzwhat I'd like to know is why I don't see br100 created automatically upon starting network14:20
creihtnagyz: I don't have relevant experience in that area, but I'm sure if you wait around a bit someone will be able to answer your question14:21
*** good_pie has joined #openstack14:24
*** cp16net has joined #openstack14:26
*** msivanes1 has joined #openstack14:32
nagyzupon starting an instance, it created br10014:33
nagyzhowever, the VM can't get an IP thru DHCP14:33
nagyzbut dnsmasq is running14:33
*** msivanes has quit IRC14:33
*** lborda has joined #openstack14:38
*** reed has joined #openstack14:38
*** Eyk^off is now known as Eyk14:43
*** tomeff has joined #openstack14:43
*** neogenix has quit IRC14:44
*** dragondm has joined #openstack14:45
*** cereal_bars has joined #openstack14:48
*** ejat has joined #openstack14:49
*** ejat has joined #openstack14:49
*** dgags has joined #openstack14:52
*** EricAtGT has joined #openstack14:52
*** dolphm has quit IRC14:55
*** amccabe has left #openstack14:56
*** dolphm has joined #openstack14:56
*** EricAtGT has quit IRC14:58
*** EricAtGT has joined #openstack14:59
*** andy-hk has quit IRC15:00
*** rnirmal has joined #openstack15:00
*** willaerk has quit IRC15:02
*** EricAtGT has quit IRC15:04
*** stewart has quit IRC15:05
*** zigo has joined #openstack15:08
*** rnirmal has quit IRC15:09
*** reidrac has quit IRC15:10
*** whitt has joined #openstack15:11
*** EricAtGT has joined #openstack15:12
*** EricAtGT has quit IRC15:13
*** cp16net has quit IRC15:14
*** zigo has quit IRC15:15
*** dobber has quit IRC15:19
*** med_out is now known as medberry15:22
*** neogenix has joined #openstack15:23
*** EricAtGT has joined #openstack15:23
*** dendrobates is now known as dendro-afk15:25
*** mfischer has joined #openstack15:26
*** nerens has quit IRC15:29
huslagenagyz: what's the error?15:30
*** EricAtGT has quit IRC15:30
*** uksysadmin has quit IRC15:30
*** openpercept_ has quit IRC15:31
kim0soren: Hi there, your openstack session should start in 30 mins .. Please ping me to confirm you'll be ready. Thanks a lot15:33
*** dspano has joined #openstack15:35
*** dspano has quit IRC15:37
*** Fu4ny has joined #openstack15:39
nagyzhuslage, it's working now.15:40
nagyzthe problem was that the firewall blocked it15:40
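When a host firewall is eating guest DHCP like this, letting bootps/bootpc through on the bridge is one fix; a sketch only, since rule placement depends on the existing ruleset (br100 is the nova default bridge name):

```shell
# Allow DHCP (udp 67/68) arriving on the instance bridge
iptables -I INPUT -i br100 -p udp --dport 67:68 -j ACCEPT
```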
*** Fu4ny has left #openstack15:41
*** chomping has joined #openstack15:41
sorenkim0: I'm ready now.15:42
sorenkim0: Just needed to find some wifi coverage :)15:42
kim0soren: phew great :)15:42
kim0soren: thanks a lot15:42
sorenkim0: Sorry about that. :)15:44
kim0hehe no problemo15:44
kim0@everyone .. Howdy folks, Ubuntu cloud days (day-2) starting in #ubuntu-classroom on the hour .. see you there15:45
larissakim0: Error: "everyone" is not a valid command.15:45
*** HowardRoark has joined #openstack15:49
*** lorin1 has quit IRC15:51
*** nagyz has quit IRC15:53
*** mnour1 has quit IRC15:59
*** mnour has joined #openstack15:59
*** jsalisbury has joined #openstack15:59
*** neogenix has quit IRC15:59
*** neogenix has joined #openstack15:59
*** Guest34067 has joined #openstack16:00
*** dendro-afk is now known as dendrobates16:02
*** huslage has quit IRC16:02
*** gondoi has joined #openstack16:02
*** bonzay is now known as zz_bonzay16:04
*** Guest34067 has quit IRC16:04
*** KAM has joined #openstack16:06
*** mfischer has quit IRC16:07
*** clauden has quit IRC16:07
*** zz_bonzay is now known as bonzay16:08
*** johnpur has quit IRC16:12
*** countspongebob has joined #openstack16:12
*** odyi has quit IRC16:13
*** katkee has quit IRC16:15
*** FallenPegasus has joined #openstack16:17
*** nerens has joined #openstack16:17
*** Tribaal has quit IRC16:18
*** countspongebob has quit IRC16:19
*** Ephur has quit IRC16:19
*** odyi has joined #openstack16:24
*** odyi has joined #openstack16:24
*** hingo has joined #openstack16:25
*** aliguori has quit IRC16:26
*** rjimenez has quit IRC16:29
*** hingo has joined #openstack16:30
*** odyi has quit IRC16:30
*** mfischer has joined #openstack16:32
*** jahor has quit IRC16:32
*** EricAtGT has joined #openstack16:33
*** dspano has joined #openstack16:34
*** odyi has joined #openstack16:36
*** odyi has joined #openstack16:36
*** jdurgin has joined #openstack16:41
*** clauden_ has joined #openstack16:41
*** mgoldmann has quit IRC16:42
*** obino has quit IRC16:42
*** EricAtGT has quit IRC16:45
*** odyi has quit IRC16:46
*** cereal_bars has quit IRC16:46
*** neogenix has quit IRC16:46
*** neogenix has joined #openstack16:47
*** odyi has joined #openstack16:47
*** odyi has joined #openstack16:47
*** clauden_ has quit IRC16:48
*** clauden has joined #openstack16:49
*** odyi has quit IRC16:49
*** cereal_bars has joined #openstack16:51
*** ctennis has quit IRC16:51
*** ctennis_ has joined #openstack16:52
*** mnour has quit IRC16:54
*** mnour has joined #openstack16:54
*** ujjain has quit IRC16:54
*** stewart has joined #openstack16:55
*** aliguori has joined #openstack16:57
*** ldlework has quit IRC16:58
*** neogenix has quit IRC16:59
*** koolhead17 has joined #openstack16:59
*** ujjain has joined #openstack16:59
*** clauden has quit IRC16:59
*** neogenix has joined #openstack16:59
*** ldlework has joined #openstack16:59
*** Capashen has quit IRC17:01
*** EricAtGT has joined #openstack17:07
*** stewart has quit IRC17:09
*** tunix has joined #openstack17:10
*** good_pie has quit IRC17:10
*** tunix has quit IRC17:11
*** kbringard has joined #openstack17:11
*** alperkanat has joined #openstack17:11
*** EricAtGT has quit IRC17:11
*** alperkanat has quit IRC17:13
*** darraghb has quit IRC17:14
*** maplebed has joined #openstack17:15
*** lborda has quit IRC17:17
*** nid0 has quit IRC17:18
*** FallenPegasus has quit IRC17:20
*** ujjain has quit IRC17:21
*** EricAtGT has joined #openstack17:21
*** joearnold has joined #openstack17:21
*** morfeas has joined #openstack17:22
koolhead17soren: hey17:22
*** morfeas has quit IRC17:22
*** hingo has quit IRC17:23
*** morfeas has joined #openstack17:23
*** EricAtGT has left #openstack17:23
*** obino has joined #openstack17:25
*** neogenix has quit IRC17:25
*** huslage has joined #openstack17:27
*** duker has quit IRC17:28
*** mgius has joined #openstack17:30
*** jheiss has quit IRC17:33
*** jheiss has joined #openstack17:33
*** GeoDud has joined #openstack17:34
*** daysmen has quit IRC17:34
*** Eyk is now known as Eyk^off17:35
*** parkerro has quit IRC17:35
*** bithooki1 has quit IRC17:39
*** adrian17od has joined #openstack17:46
*** adrian17od has left #openstack17:47
uvirtbot`New bug: #816555 in nova "Attach volume fails with NameError: global name 'vol' is not defined" [Undecided,New] https://launchpad.net/bugs/81655517:47
*** neogenix has joined #openstack17:47
*** adrian17od has joined #openstack17:49
*** adrian17od has left #openstack17:49
*** adrian17od has joined #openstack17:50
*** stewart has joined #openstack17:52
*** odyi has joined #openstack17:52
*** odyi has joined #openstack17:52
*** odyi has quit IRC17:52
*** adrian17od is now known as adrian17:56
*** adrian is now known as Guest6418717:56
*** Guest64187 has left #openstack17:58
*** Guest64187 has joined #openstack18:02
*** jamshid has quit IRC18:02
*** odyi has joined #openstack18:02
*** odyi has joined #openstack18:02
*** Guest64187 has left #openstack18:03
sorenkoolhead17: hey18:05
*** adrian17od has joined #openstack18:06
koolhead17soren: long time. how have you been?18:07
*** anm3rt has joined #openstack18:08
sorenkoolhead17: disconnected, mostly.18:08
sorenkoolhead17: :)18:08
*** KarimAllah has joined #openstack18:08
sorenkoolhead17: and you?18:08
*** FallenPegasus has joined #openstack18:08
koolhead17soren: good. disconnected as well. openstack is changing like anything :)18:09
*** KAM has quit IRC18:10
*** mattray has quit IRC18:11
*** AhmedSoliman has quit IRC18:11
*** adrian17od has left #openstack18:12
*** dendrobates is now known as dendro-afk18:12
*** adrian17od has joined #openstack18:15
*** cp16net has joined #openstack18:15
*** adrian17od has joined #openstack18:16
*** mfer has quit IRC18:16
*** mfer has joined #openstack18:16
*** neogenix has quit IRC18:17
*** neogenix has joined #openstack18:17
*** KarimAllah has left #openstack18:17
*** neogenix has quit IRC18:19
*** neogenix has joined #openstack18:19
*** mattray has joined #openstack18:20
*** morfeas has quit IRC18:21
*** adrian17od has left #openstack18:22
*** dendro-afk is now known as dendrobates18:24
*** dendrobates is now known as dendro-afk18:25
*** neogenix has quit IRC18:27
*** neogenix has joined #openstack18:27
*** jheiss has quit IRC18:27
*** mnour has quit IRC18:28
*** stanchan has joined #openstack18:28
*** Eyk^off is now known as Eyk18:28
*** ldlework has quit IRC18:28
*** jheiss has joined #openstack18:29
*** ldlework has joined #openstack18:32
*** obino has quit IRC18:36
*** neogenix has quit IRC18:38
*** stanchan has quit IRC18:39
*** obino has joined #openstack18:39
*** stanchan has joined #openstack18:39
*** Flint has quit IRC18:40
*** PeteDaGuru has joined #openstack18:43
*** sante has joined #openstack18:46
*** anm3rt has left #openstack18:46
*** dendro-afk is now known as dendrobates18:49
*** nid0 has joined #openstack18:50
*** maplebed has quit IRC18:50
*** maplebed has joined #openstack18:50
*** bengrue has joined #openstack18:50
*** msivanes1 has quit IRC18:51
*** obino has quit IRC19:03
*** cp16net has quit IRC19:03
*** fabiand__ has joined #openstack19:05
*** cp16net has joined #openstack19:05
*** cp16net_ has joined #openstack19:06
uvirtbot`New bug: #816601 in nova "OSAPI: 500 error on bad server personality contents" [Undecided,New] https://launchpad.net/bugs/81660119:06
*** cp16net_ has quit IRC19:07
*** msivanes has joined #openstack19:07
*** cp16net__ has joined #openstack19:07
*** cp16net has quit IRC19:07
*** cp16net__ is now known as cp16net19:07
*** cp16net has quit IRC19:08
*** cp16net has joined #openstack19:08
*** lorin1 has joined #openstack19:10
*** nati has joined #openstack19:11
*** dendrobates is now known as dendro-afk19:11
*** msivanes1 has joined #openstack19:14
*** msivanes1 has left #openstack19:14
*** adjohn has joined #openstack19:15
*** msivanes has quit IRC19:16
*** HouseAway has quit IRC19:19
*** sante has quit IRC19:19
*** hggdh has quit IRC19:19
koolhead17annegentle: around19:19
uvirtbot`New bug: #816604 in nova "OSAPI: created and updated for /servers have incorrect time format" [Undecided,New] https://launchpad.net/bugs/81660419:22
*** adjohn has quit IRC19:25
*** lborda has joined #openstack19:25
*** clauden_ has joined #openstack19:27
*** mgius has quit IRC19:27
*** lborda has quit IRC19:28
*** tryggvil___ has joined #openstack19:29
*** dolphm has quit IRC19:31
uvirtbot`New bug: #816612 in nova "add_fixed_ip_to_instance() now requires host for multi-host networks" [Undecided,New] https://launchpad.net/bugs/81661219:31
*** PeteDaGuru has quit IRC19:32
*** tryggvil has quit IRC19:33
*** FallenPegasus has quit IRC19:34
*** troytoman is now known as troytoman-away19:37
*** ejat has quit IRC19:40
annegentlekoolhead17: yup, what's up?19:40
*** dolphm has joined #openstack19:41
koolhead17annegentle: am good. I was wondering how documentation is handled currently, as every release (say cactus/bexar/diablo) has major changes between them.19:42
koolhead17I don't think common documentation for openstack as a whole is a great idea. :P19:42
*** huslage has quit IRC19:43
*** _et has joined #openstack19:43
mfischerwhy not?19:44
*** cixie has quit IRC19:44
*** garetjax has quit IRC19:45
koolhead17mfischer: because the documentation should cater to everyone and every release IMHO19:46
mfischernot sure I follow...19:46
*** fabiand__ has quit IRC19:46
*** clauden_ has quit IRC19:47
*** adjohn has joined #openstack19:48
*** ddutta has joined #openstack19:50
annegentlekoolhead17, mfischer: yes it's a tough spot. So the docs.openstack.org site has bexar/  cactus/ trunk/ directories now.19:52
*** dolphm has quit IRC19:52
annegentlebut there's a lot of churn as you can imagine.19:52
koolhead17annegentle: yay!! :)19:53
*** parkerro has joined #openstack19:53
koolhead17annegentle: cool.19:53
annegentleand I'm being asked to put together an /api/ sub directory as well - because the 1.1 API will work against cactus and diablo and so on. So an API doesn't need to track with an OpenStack release.19:53
_etannegentle: the new arch dia is waaaay better than the earlier one19:53
annegentlecurrently I update cactus manually, and trunk updates are automated19:53
annegentle_et: ah, good. Yes, incremental improvements, absolutely.19:54
*** dolphm has joined #openstack19:54
koolhead17annegentle:  indeed :)19:54
annegentlewhat's awkward about the openstack-manuals doc project is that I'm the bottleneck for fixes to the Cactus docs19:54
annegentleso I should try to figure out with the CI team if we can automate cactus builds separately from trunk builds. I know we can, just have to set it up.19:55
_etannegentle: we are here to help19:55
koolhead17_et: +119:55
annegentle_et: love the contributions already, definitely supporting more people in getting started with the Image Management chapter.19:56
*** BuZZ-T has joined #openstack19:57
_etannegentle: +ttx shed some light on the arch and control flow... will trouble you if I need more help.. :)19:57
*** hggdh has joined #openstack19:58
_etannegentle: btw what tz are you on?19:58
*** FallenPegasus has joined #openstack20:01
*** aliguori has quit IRC20:02
*** AimanA has joined #openstack20:03
*** mfischer has quit IRC20:03
*** koolhead17 is now known as koolhead17|afk20:04
*** _et has left #openstack20:05
parkerroquestion about stats configuration on multi-node?  There appears to be a gz file for each storage node?  In our 4 node cluster all the entries are triplets and most have the same numeric data.  I assume this shows the replication. Is this correct?20:05
*** dolphm has quit IRC20:05
*** Flint has joined #openstack20:07
*** nmistry has joined #openstack20:10
*** mencken has quit IRC20:11
*** jheiss has quit IRC20:11
*** ejat has joined #openstack20:11
*** ejat has joined #openstack20:11
annegentleI'm in Austin TX so GMT -6 I guess?20:12
*** jheiss has joined #openstack20:12
*** nati has quit IRC20:14
*** Daviey has quit IRC20:14
*** tryggvil has joined #openstack20:15
*** dolphm has joined #openstack20:16
murkkI cannot seem to migrate the glance registry database20:17
murkksqlalchemy.exc.OperationalError: (OperationalError) unable to open database file None None20:17
murkkthe command I use is: glance-manage --config-file=/etc/glance/glance-registry.conf --sql-connection="sqlite:///var/lib/glance/glance.sqlite" db_sync20:18
*** aliguori has joined #openstack20:18
murkkthe sqlite file does exist20:18
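[Editor's note: a likely cause of the OperationalError above, sketched under the assumption that SQLAlchemy's slash-counting is the culprit. In a SQLAlchemy SQLite URL, three slashes (`sqlite:///path`) denote a path *relative* to the working directory; an absolute path such as `/var/lib/glance/glance.sqlite` needs four slashes. The helper below is hypothetical, for illustration only.]

```python
# Sketch: how SQLAlchemy interprets slashes in a sqlite:// URL.
# "sqlite:///var/lib/..."  -> relative path "var/lib/..."
# "sqlite:////var/lib/..." -> absolute path "/var/lib/..."

def sqlite_url_path(url):
    """Return the filesystem path a sqlite:/// URL refers to (hypothetical helper)."""
    prefix = "sqlite:///"
    if not url.startswith(prefix):
        raise ValueError("not a sqlite URL: %r" % url)
    # Everything after the third slash is the path; a leading "/" here
    # (i.e. a fourth slash in the URL) makes it absolute.
    return url[len(prefix):]

# The URL from the glance-manage command above resolves to a path
# relative to the current working directory:
print(sqlite_url_path("sqlite:///var/lib/glance/glance.sqlite"))
# -> var/lib/glance/glance.sqlite

# With four slashes, the path is absolute:
print(sqlite_url_path("sqlite:////var/lib/glance/glance.sqlite"))
# -> /var/lib/glance/glance.sqlite
```

If glance-manage is run from a directory where `var/lib/glance/glance.sqlite` does not exist, SQLite reports exactly the "unable to open database file" error shown above even though the file exists at the absolute path.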
parkerronotmyname: working with Flint, we are trying to verify that there are indeed supposed to be "duplicate" data for containers in the container_stats for a given hour?20:19
*** cp16net has quit IRC20:25
*** Daviey has joined #openstack20:26
*** liemmn has joined #openstack20:27
*** cp16net has joined #openstack20:29
*** zigo has joined #openstack20:29
uvirtbot`New bug: #816630 in nova "broadcast ip is being assigned out as IP address" [Undecided,New] https://launchpad.net/bugs/81663020:36
*** hggdh has quit IRC20:36
*** hggdh has joined #openstack20:37
*** _adjohn has joined #openstack20:37
*** ctennis_ has quit IRC20:37
*** dgags has quit IRC20:38
*** nmistry has quit IRC20:39
*** adjohn has quit IRC20:40
*** mwhooker has joined #openstack20:40
*** _adjohn is now known as adjohn20:40
*** pothos_ has joined #openstack20:41
mwhookerHello. I got a 404 for http://nova.openstack.org/Twisted-10.0.0Nova.tar.gz when running python ./tools/install_venv.py in Nova20:41
mwhookeris Twisted still a dependency?20:41
*** pothos has quit IRC20:42
*** pothos_ is now known as pothos20:43
*** tryggvil has quit IRC20:43
*** zigo has quit IRC20:44
*** brd_from_italy has joined #openstack20:45
*** tryggvil has joined #openstack20:45
*** msinhore has quit IRC20:45
*** lorin1 has quit IRC20:51
*** PeteDaGuru has joined #openstack20:53
*** primeministerp1 has joined #openstack20:53
kpepplemwhooker: i think it is still used for nova-objectstore ...20:54
*** lborda has joined #openstack20:55
*** clauden_ has joined #openstack20:56
*** joearnol_ has joined #openstack20:58
*** asomya has joined #openstack20:59
*** liemmn has quit IRC21:00
mwhookerkpepple: okay, thanks. Any idea if the link has moved?21:01
*** joearnold has quit IRC21:01
*** hingo has joined #openstack21:01
*** troytoman-away is now known as troytoman21:02
kpepplemwhooker: my guess is that the version we are using is no longer offered and has been pulled from pypi. twisted is still at http://twistedmatrix.com/trac/21:02
mwhookerthe version the venv script uses was hosted at  http://nova.openstack.org/Twisted-10.0.0Nova.tar.gz21:03
kpepplemwhooker: ohhh ... didn't realize we had our own version21:04
*** aliguori has quit IRC21:05
*** dendro-afk is now known as dendrobates21:05
creihtkpepple: yeah at one time I think they had some special patches?21:05
*** hggdh has quit IRC21:08
*** dolphm has quit IRC21:09
*** sdadh01 has quit IRC21:10
*** jheiss has quit IRC21:12
*** cp16net has quit IRC21:12
*** jheiss has joined #openstack21:13
mwhookeranyway, I'm looking for that file if anyone knows where it went21:14
*** martine has quit IRC21:15
*** hingo_ has joined #openstack21:16
*** hingo has quit IRC21:16
*** sante has joined #openstack21:17
*** ameade has quit IRC21:17
*** aliguori has joined #openstack21:17
*** iammartian has joined #openstack21:19
*** iammartian has left #openstack21:19
*** ctennis has joined #openstack21:19
*** ctennis has joined #openstack21:19
*** sante has quit IRC21:19
*** dspano has quit IRC21:22
*** hggdh has joined #openstack21:26
*** _adjohn has joined #openstack21:29
*** shentonfreude has quit IRC21:29
*** msinhore has joined #openstack21:29
*** msinhore has joined #openstack21:29
*** katkee has joined #openstack21:30
*** adjohn has quit IRC21:31
*** _adjohn is now known as adjohn21:31
*** cereal_bars has quit IRC21:34
*** kbringard has quit IRC21:36
*** kbringard has joined #openstack21:37
*** caribou has quit IRC21:38
*** hggdh has quit IRC21:38
*** lborda has quit IRC21:40
*** jheiss has quit IRC21:43
*** hggdh has joined #openstack21:45
*** hingo_ has quit IRC21:48
*** lts has quit IRC21:51
*** carlp has joined #openstack21:52
*** dendrobates is now known as dendro-afk21:52
*** matiu has joined #openstack21:54
*** FallenPegasus has quit IRC21:54
*** Jamey has joined #openstack22:00
*** Eyk is now known as Eyk^off22:02
*** bcwaldon has quit IRC22:02
*** aliguori has quit IRC22:10
*** mfer has quit IRC22:10
*** stewart has quit IRC22:14
*** dendro-afk is now known as dendrobates22:15
*** jheiss has joined #openstack22:15
*** gaitan has quit IRC22:16
*** bsza has quit IRC22:17
*** glenc_ has joined #openstack22:25
*** joearnol_ has quit IRC22:26
*** glenc has quit IRC22:27
*** mattray has quit IRC22:29
*** kbringard_ has joined #openstack22:31
*** kbringard has quit IRC22:31
*** kbringard_ is now known as kbringard22:31
*** kbringard has quit IRC22:32
*** glenc_ is now known as glenc22:34
*** ejat has quit IRC22:39
*** MetaMucil has quit IRC22:39
*** joearnold has joined #openstack22:40
*** jsalisbury has quit IRC22:44
*** HowardRoark has quit IRC22:45
*** jsalisbury has joined #openstack22:46
*** brd_from_italy has quit IRC22:46
*** jsalisbury has quit IRC22:51
*** jsalisbury has joined #openstack22:54
*** msinhore has quit IRC22:54
*** ejat has joined #openstack22:55
*** ejat has joined #openstack22:55
*** miclorb_ has joined #openstack22:55
*** hingo has joined #openstack22:56
*** katkee has quit IRC22:56
*** troytoman is now known as troytoman-away22:58
*** alekibango has quit IRC22:59
*** medberry is now known as med_out22:59
*** joearnold has quit IRC22:59
*** joearnold has joined #openstack23:00
*** nati has joined #openstack23:01
uvirtbot`New bug: #816699 in nova "skipped tests need to be reenabled" [Critical,In progress] https://launchpad.net/bugs/81669923:01
*** asomya has quit IRC23:01
*** jeffjapan has joined #openstack23:02
*** jsalisbury has quit IRC23:04
*** FallenPegasus has joined #openstack23:05
*** nati has quit IRC23:05
*** jsalisbury has joined #openstack23:07
*** jkoelker has quit IRC23:08
*** dijenerate has joined #openstack23:09
dijenerateso... where do I get started?23:10
*** marrusl has quit IRC23:11
*** ldlework has quit IRC23:11
*** tomeff has quit IRC23:16
*** HowardRoark has joined #openstack23:19
*** FallenPegasus has quit IRC23:23
*** joearnold has quit IRC23:24
*** hingo has quit IRC23:28
*** jsalisbury has quit IRC23:29
*** jeffjapan has quit IRC23:29
uvirtbot`New bug: #816713 in nova "instance launching broken when using nova-api generated imageRefs through osapi" [Undecided,New] https://launchpad.net/bugs/81671323:31
*** xxtjaxx has quit IRC23:32
*** xxtjaxx has joined #openstack23:32
*** xxtjaxx has joined #openstack23:32
*** dragondm has quit IRC23:33
*** dragondm has joined #openstack23:35
*** markvoelker has quit IRC23:40
*** rchavik has joined #openstack23:42
uvirtbot`New bug: #816725 in nova "pep8 failures" [Undecided,New] https://launchpad.net/bugs/81672523:42
*** mfischer has joined #openstack23:45
*** matiu has quit IRC23:46
*** GeoDud has quit IRC23:46
*** matiu has joined #openstack23:51
*** FallenPegasus has joined #openstack23:52

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!