Sunday, 2011-05-29

*** romank has quit IRC00:06
*** jaypipes has joined #openstack00:13
*** paltman has joined #openstack00:35
*** joearnold has quit IRC00:35
*** koolhead17 has quit IRC00:48
*** dysinger has quit IRC00:54
*** toddmorey has joined #openstack00:59
*** dysinger has joined #openstack01:00
*** miclorb_ has joined #openstack01:01
*** zul has joined #openstack01:02
*** obino has joined #openstack01:21
<uvirtbot> New bug: #789727 in nova "test_libvirt: cleanup instances_path dir" [Wishlist,In progress]
*** joearnold has joined #openstack01:44
*** yamahata_lt has joined #openstack01:49
*** joearnold has quit IRC01:51
*** Ryan_Lane has joined #openstack02:11
*** toddmorey has quit IRC02:19
*** Ryan_Lane1 has joined #openstack02:19
*** Ryan_Lane1 has joined #openstack02:20
*** Ryan_Lane has quit IRC02:20
*** Ryan_Lane1 is now known as Ryan_Lane02:22
*** hadrian has quit IRC03:00
*** toddmorey has joined #openstack03:07
*** gaveen has quit IRC03:09
*** sophiap has quit IRC03:10
*** gaveen has joined #openstack03:16
<uvirtbot> New bug: #789741 in nova "fix instance rebuilds (compute manager and metadata)" [Undecided,In progress]
*** gaveen has quit IRC03:39
*** gaveen has joined #openstack03:52
*** cdbs has quit IRC03:57
*** zns has joined #openstack04:07
<uvirtbot> New bug: #789755 in nova "OSAPI: v1.1 /servers resize action with flavorRef" [Undecided,In progress]
*** sebastianstadil has joined #openstack04:36
*** cdbs has joined #openstack04:42
*** zns has quit IRC04:44
*** cdbs has quit IRC04:48
*** toddmorey has quit IRC04:58
*** toddmorey has joined #openstack05:04
*** cdbs has joined #openstack05:10
*** toddmorey has quit IRC05:33
*** miclorb_ has quit IRC05:34
*** zenmatt has joined #openstack05:38
*** lucasnodine has quit IRC05:46
*** burris has quit IRC06:27
*** mgoldmann has joined #openstack06:38
*** sebastianstadil has quit IRC06:48
*** adjohn has joined #openstack07:04
*** toddmorey has joined #openstack07:04
*** adjohn has quit IRC07:05
*** osier has joined #openstack07:05
*** cdbs has quit IRC07:44
*** kennethkalmer has joined #openstack08:00
*** magglass1 has quit IRC08:06
*** magglass1 has joined #openstack08:14
*** allsystemsarego has joined #openstack08:21
*** magglass1 has quit IRC08:22
*** mcclurmc_ has left #openstack08:22
*** Razique has joined #openstack08:29
*** magglass1 has joined #openstack08:29
<Razique> Hi all!  08:29
*** cdbs has joined #openstack08:30
*** dysinger has quit IRC08:34
*** cdbs has quit IRC08:40
*** cdbs has joined #openstack08:44
*** kennethkalmer has quit IRC08:47
*** adiantum has joined #openstack08:54
*** toddmorey has quit IRC09:04
*** burris has joined #openstack09:39
*** mgoldmann has quit IRC09:39
*** Eyk^off is now known as Eyk09:56
*** burris has quit IRC10:00
*** HugoKuo__ has quit IRC10:16
*** HugoKuo has joined #openstack10:17
*** e1mer has joined #openstack10:20
*** Razique has quit IRC10:42
*** Ryan_Lane has quit IRC10:44
*** cascone has joined #openstack10:48
*** gaveen has quit IRC10:56
*** reolik has joined #openstack11:13
*** sranthony has joined #openstack11:16
*** sranthony has quit IRC11:16
*** reolik has quit IRC11:26
*** adiantum has quit IRC11:37
*** MarkAtwood has joined #openstack11:42
*** sebastianstadil has joined #openstack11:47
*** Eyk is now known as Eyk^off11:59
*** ziyadb_ has joined #openstack12:21
*** ziyadb_ has quit IRC12:27
*** cascone has quit IRC12:30
*** ziyadb has joined #openstack12:43
*** ziyadb has joined #openstack12:43
*** _william_ has joined #openstack12:44
<ziyadb> so you guys use a SAN for block storage w/ nova?  12:46
<ziyadb> where can I read up on that?  12:46
*** RobertLaptop has quit IRC12:52
*** kennethkalmer has joined #openstack12:53
<ziyadb> i presume swift uses a SAN for object storage as well?  13:00
*** MarkAtwood has quit IRC13:07
<dsockwell> ziyadb: i've got a fibre channel setup i'm trying out today, i'll keep you posted if you want?  13:08
<dsockwell> not quite sure what i'm going to do about it  13:08
<ziyadb> dsockwell: yea, I'm a network guy, I want to know what kind of infrastructure i need to build to support swift and nova storage.  13:09
<dsockwell> i must be upfront, i haven't actually gotten to the installing-openstack part on my setup  13:10
<dsockwell> but from poking through the code it seems to be set up for iscsi  13:10
<dsockwell> and from what I read about swift, it's supposed to be a clustered storage system; many machines with their own DAS that make up the service  13:11
<dsockwell> could be kludged with a san if you want, but if you are building out infrastructure from scratch you will probably want to do ethernet and not fc  13:11
<dsockwell> of course i could be completely wrong  13:12
<ziyadb> why hasn't anyone written a book about this?  13:12
*** hadrian has joined #openstack13:12
*** ziyadb has quit IRC13:14
*** zns has joined #openstack13:23
*** ziyadb has joined #openstack13:24
*** ziyadb has joined #openstack13:24
<ziyadb> dsockwell: not bad. but most of it is speaking from an implementation perspective, not the infrastructure required to support it. that's what im looking for.  13:24
<dsockwell> yeah, it doesn't say much other than 1000BaseTX or faster is recommended  13:27
<ziyadb> you know what  13:27
<ziyadb> i'll write a damned "building cloud infrastructure" book.  13:27
<ziyadb> just as soon as I figure it out  13:28
<dsockwell> so really you've got your image storage service, your block storage (swift), and your compute nodes  13:28
<dsockwell> wait, is swift block storage or objects?  13:29
<dsockwell> :( looks like i need to do this for myself too  13:29
<ziyadb> heh, swift is object storage  13:30
<ziyadb> so you cant use swift as block storage for nova  13:30
*** osier has quit IRC13:35
<dsockwell> anyway, good luck, sorry i'm not more help  13:35
<ziyadb> heh thanks, let me know how your build turns out  13:36
<dsockwell> all right.  i'm planning on throwing it on a wiki, so i'll link it here when it's got substance.  13:37
<ziyadb> even if not much substance :) anything helps at this point.  13:39
<dsockwell> all right.  the plan now is to base everything off a SAN, with the nova machines booting from LUNs on my openindiana box.  13:39
<dsockwell> i'm toying with the idea of a large, shared LUN for VMs but that's more complexity I probably won't deal with  13:40
<dsockwell> so i'll probably just use SAN storage the same as DAS  13:40
<dsockwell> but that's all done through Linux, i'm going to keep nova ignorant of it  13:42
<ziyadb> interesting. I have some more reading to do, reading a storage book by EMC press.  13:42
<ziyadb> i'll ping you later.  13:42
<dsockwell> but, this is because I'm building this installation out of essentially spare parts  13:42
<ziyadb> for private use?  13:43
<dsockwell> sort of  13:43
<dsockwell> it's for a computer club  13:43
<dsockwell> so if there's a performance issue we can tell people to suck it up  13:44
*** lucasnodine has joined #openstack13:44
<ziyadb> ah, i see.  13:45
<dsockwell> anyway if i had my choice of hardware i'd put my openindiana box on a 10GE uplink to a gbit switch and run iscsi for everything, booting the nova machines from DAS  13:45
<ziyadb> im building a public cloud  13:45
<dsockwell> since nova already has a driver for block storage on OI  13:45
<dsockwell> or any solaris  13:45
*** Lucas_Nodine has joined #openstack13:46
<dsockwell> that way i could concentrate i/o on the cluster to a few SSDs  13:46
<dsockwell> but as it stands i'm doing sort of the same thing with an old 2gb FC switch  13:47
<ziyadb> interesting, interesting.  13:47
<ziyadb> I need to familiarize myself with nova further before moving forward.  13:47
<ziyadb> which, as you can imagine, is not as easy as it ought to be.  13:48
<dsockwell> what hypervisor are you considering?  13:48
*** lucasnodine has quit IRC13:49
<ziyadb> most likely esx  13:50
*** koolhead17 has joined #openstack13:51
<dsockwell> ah, can't say that i've dealt with that.  i was going to say that centos6 is coming, and openstack has packages for el6 that should fit in nicely if you can deal with kvm instead of xen  13:52
<ziyadb> main reason we're considering esx is because I "think" it has better network support  13:53
*** freeflyi1g is now known as freeflying13:54
<dsockwell> it might  13:54
<dsockwell> what kind of networking features do you need?  13:54
<ziyadb> whatever is needed to support openstack  13:55
<ziyadb> im a network guy and have been doing this open stack thing for 3 days  13:55
<ziyadb> so im yet to fully wrap my head around it.  13:55
<dsockwell> ok, so you're not being asked for multiple interfaces per instance or whatever else?  13:55
<ziyadb> I don't suppose we're gonna need more than a single interface per vm instance, no.  13:56
<ziyadb> assuming we're talking about network interfaces  13:57
<dsockwell> openstack's best networking mode, what i think you'd use for a public cloud, is l2 isolation per vm.  that's all done in Linux  13:57
<dsockwell> the server is the switch  13:57
<dsockwell> so between the networking controller and the compute nodes is what amounts to an 802.11q trunk, and it's split into vlans in Linux on the compute nodes  13:57
<dsockwell> so in that respect esx doesn't have anything special  13:58
<ziyadb> yeah, I have an idea about all of that. But there are other things to consider, like having more than a VM in a L2 domain (i.e. extending the L2 domain from the virtual switch to a physical one which in turn extends it to another virtual switch) for backup purposes and mobility.  13:58
<dsockwell> well it is vanilla 802.11q, so i'm sure with enough patience you could do whatever you have in mind with just Linux.  but if there's a feature in esx you know about and I don't, go for it  14:01
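[Editor's note: the "vanilla" 802.1q tagging discussed above is nothing more than a 4-byte tag spliced into the Ethernet header, which is why plain Linux can split the trunk into per-tenant VLANs. A minimal sketch of building that tag; the VLAN IDs used are illustrative, not from any real deployment.]

```python
import struct

# Sketch: construct the 4-byte 802.1q tag that separates tenant VLANs
# on the trunk between the network controller and the compute nodes.
TPID = 0x8100  # fixed EtherType value announcing a VLAN tag follows

def dot1q_tag(vlan_id, priority=0):
    """Return the 4-byte 802.1q tag for vlan_id (0-4095)."""
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID must fit in 12 bits")
    # TCI = PCP (3 bits) | DEI (1 bit, left 0) | VID (12 bits)
    tci = (priority << 13) | vlan_id
    return struct.pack("!HH", TPID, tci)

print(dot1q_tag(100).hex())  # '81000064'
```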
*** e1mer has quit IRC14:02
<ziyadb> dsockwell: awesomeness, and i take it you mean 802.1q not 11q :)  14:04
<ziyadb> idle around often?  14:04
<dsockwell> ah, yes  14:04
<dsockwell> and i'm usually here  14:05
<dsockwell> that is if you highlight me i'll see the activity, eventually  14:06
<notmyname> ziyadb: for swift, 10g is recommended for external connections and 1g or faster is recommended for internal cluster bandwidth  14:10
<notmyname> swift uses DAS, not SAN  14:10
*** cascone has joined #openstack14:10
*** cascone has quit IRC14:13
<notmyname> for a test cluster 1g could be used for external connections  14:14
<notmyname> swift is designed for 2 things: cheap storage ($/GB) and extremely high concurrency (ie optimize aggregate throughput rather than single stream throughput)  14:23
*** yamahata_lt has quit IRC14:24
<ziyadb> notmyname: and nova uses SAN?  14:27
<notmyname> I can't speak for nova  14:28
*** burris has joined #openstack14:28
<ziyadb> notmyname: awesome, thanks for your input  14:29
<dsockwell> nova-volume can use DAS or iscsi  14:29
<dsockwell> pretty sure most instances have a DAS component  14:29
<ziyadb> yeah, wondering if it's possible to use block storage instead via iscsi or fcip/fcoe  14:31
<notmyname> keeping in mind my last statement, I think so  14:32
<dsockwell> you can mount whatever you want for the disk image storage, and iscsi is the accepted elastic block storage driver  14:33
*** ostck has joined #openstack14:33
<ziyadb> so it's iscsi and fc is mostly off the table  14:33
<dsockwell> pretty much  14:33
<dsockwell> i might get a driver together for solaris fc, but i think development is focusing on iscsi  14:34
<ziyadb> iscsi runs over ip, right?  14:34
<dsockwell> unless i'm sorely mistaken, yes  14:34
<dsockwell> it at least operates over ethernet  14:35
<ziyadb> so no need for stupid specialized network gear  14:35
<dsockwell> and if it weren't ip it would be called aoe or fcoe  14:35
<dsockwell> no, just a good ethernet switch  14:35
*** BoncOS has joined #openstack14:37
*** ziyadb is now known as imnotziyadb14:42
*** imnotziyadb has quit IRC14:48
*** ziyadb has joined #openstack15:03
*** zns has quit IRC15:07
*** BoncOS has quit IRC15:10
*** RobertLaptop has joined #openstack15:12
*** osier has joined #openstack15:19
*** Eyk^off is now known as Eyk15:24
*** BoncOS has joined #openstack15:26
*** ostck has quit IRC15:38
*** doude has quit IRC15:40
*** citral has joined #openstack15:56
*** BoncOS has quit IRC16:08
*** osier has quit IRC16:18
*** foxtrotgulf has joined #openstack16:31
*** kennethkalmer has quit IRC16:35
*** mgoldmann has joined #openstack16:42
*** MarkAtwood has joined #openstack17:06
*** foxtrotgulf has quit IRC17:20
*** obino has quit IRC17:21
*** Eyk is now known as Eyk^off17:27
*** miki has joined #openstack17:28
*** kennethkalmer has joined #openstack17:32
*** kennethkalmer has quit IRC17:42
*** obino has joined #openstack17:44
*** miki has left #openstack17:45
*** HugoKuo_ has joined #openstack18:04
*** cbeck1 has joined #openstack18:05
*** shentonfreude1 has joined #openstack18:05
*** sebastianstadil_ has joined #openstack18:05
*** jpipes has joined #openstack18:05
*** drogoh_ has joined #openstack18:06
*** thickski_ has joined #openstack18:06
*** sebastianstadil has quit IRC18:06
*** HugoKuo has quit IRC18:06
*** jaypipes has quit IRC18:06
*** aryan has quit IRC18:06
*** termie has quit IRC18:06
*** cbeck has quit IRC18:06
*** shentonfreude has quit IRC18:06
*** heden has quit IRC18:06
*** thickskin has quit IRC18:06
*** Xenith has quit IRC18:06
*** Eyk^off has quit IRC18:06
*** tahoe has quit IRC18:06
*** arreyder has quit IRC18:06
*** drogoh has quit IRC18:06
*** arun has quit IRC18:06
*** romans has quit IRC18:06
*** sebastianstadil_ is now known as sebastianstadil18:06
*** romans has joined #openstack18:06
*** heden has joined #openstack18:08
*** Xenith has joined #openstack18:09
*** termie has joined #openstack18:11
*** aryan has joined #openstack18:11
*** arun has joined #openstack18:12
*** RobertLaptop has quit IRC18:13
*** arreyder has joined #openstack18:13
*** jgb has quit IRC18:23
*** RobertLaptop has joined #openstack18:26
*** _william_ has left #openstack18:45
*** maplebed has joined #openstack18:49
*** konetzed has quit IRC18:50
*** obino has quit IRC18:53
*** drogoh_ is now known as drogoh18:54
*** maplebed has quit IRC19:00
*** citral has quit IRC19:01
*** konetzed has joined #openstack19:02
*** zenmatt has quit IRC19:12
*** stewart has joined #openstack19:14
*** stewart has quit IRC19:15
*** ejat has joined #openstack19:22
*** MarkAtwood has quit IRC19:25
*** koolhead17 has quit IRC19:31
*** kennethkalmer has joined #openstack19:35
*** obino has joined #openstack19:50
*** ejat has joined #openstack19:55
*** ejat has joined #openstack19:56
*** ejat has joined #openstack19:56
*** ejat has joined #openstack19:57
*** tahoe has joined #openstack20:02
*** AWR has joined #openstack20:06
*** obino has quit IRC20:15
*** sebastianstadil has quit IRC20:16
*** ejat has quit IRC20:19
*** zenmatt has joined #openstack20:21
*** ejat has joined #openstack20:22
*** BK_man[away] has quit IRC20:27
*** katkee has joined #openstack20:27
*** obino has joined #openstack20:29
*** mgoldmann has quit IRC20:35
*** ejat has quit IRC20:40
*** zenmatt has quit IRC20:43
*** ejat has joined #openstack20:48
*** ejat has joined #openstack20:48
*** BK_man has joined #openstack20:53
*** jero has joined #openstack21:05
*** arreyder has quit IRC21:19
*** arreyder has joined #openstack21:19
*** zenmatt has joined #openstack21:27
*** RobertLaptop has quit IRC21:28
*** z0 has quit IRC21:32
*** BK_man has quit IRC21:35
*** RobertLaptop has joined #openstack21:40
*** zenmatt has quit IRC21:43
*** julian_c has joined #openstack21:48
*** Eyk has joined #openstack21:48
*** zenmatt has joined #openstack21:56
*** katkee has quit IRC22:01
*** burris has quit IRC22:03
*** antenagora has joined #openstack22:05
*** HouseAway is now known as AimanA22:07
*** allsystemsarego has quit IRC22:08
*** antenagora has quit IRC22:11
*** RobertLaptop has quit IRC22:14
*** lucasnodine has joined #openstack22:23
*** Lucas_Nodine has quit IRC22:25
<lucasnodine> Documentation question: section 2.2 states "Maximum length of all HTTP headers: 4096 bytes".  Does this mean that all headers (cumulative) must be 4kb or less or does it mean that each individual header must be 4kb or less, but that this rule applies to all of them?  22:26
<notmyname> all total  22:28
*** ShermanBoyd_ has joined #openstack22:28
<notmyname> you shouldn't be able to store all of your data in the headers! ;-)  22:29
*** ShermanBoyd has quit IRC22:29
*** ShermanBoyd_ is now known as ShermanBoyd22:29
<notmyname> however, if you want to up that limit (or lower it), I believe the value is stored in swift/common/consts.py  22:29
<lucasnodine> ah, really, thanks :D  22:30
<lucasnodine> yea, I'm hoping to store the meta information for files in them and then not have 1 file for meta and another for data  22:30
<notmyname> ya, that's what it's for  22:31
<notmyname> however, with no limit, it would be possible for a user to store the actual data in the headers and have a zero byte file  22:32
<lucasnodine> *nod* very useful feature btw.  That has proven to be cumbersome with Couch  22:32
<notmyname> now, on disk, the metadata will always be stored in xattrs, but it may or may not be metadata on the original file  22:32
<notmyname> that is, on disk, some usage patterns can result in one file with the data and one file with the metadata. to the user, though, it's all one logical object/file  22:33
<notmyname> so the user will never have to maintain a separate metadata file (unless they need more than the limit, of course)  22:34
<lucasnodine> well that's still fine.  I just hate having to make a reference for a meta and a data file  22:34
<notmyname> no need with swift :-)  22:35
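[Editor's note: given the cumulative 4096-byte header limit discussed above, a client storing metadata in headers may want to pre-check the serialized size before a PUT. A minimal sketch; the exact accounting swift applies (names, values, separators) is an assumption, and the example header names follow the X-Object-Meta-* convention.]

```python
# Sketch: client-side check against swift's cumulative HTTP header limit.
# The 4096-byte figure comes from the docs quoted above; counting each
# header as "Name: value\r\n" is an assumption about the accounting.
MAX_HEADER_BYTES = 4096

def headers_within_limit(headers, limit=MAX_HEADER_BYTES):
    """Return True if the headers' serialized size fits under the limit."""
    total = sum(len(f"{name}: {value}\r\n") for name, value in headers.items())
    return total <= limit

meta = {"X-Object-Meta-Author": "lucasnodine",
        "X-Object-Meta-Type": "invoice"}
print(headers_within_limit(meta))  # True: well under 4096 bytes
```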
*** lborda has joined #openstack22:35
<lucasnodine> that's excellent news :)  22:35
*** lborda has quit IRC22:35
*** ziyadb_ has joined #openstack22:36
<notmyname> for performance reasons, you may want to maintain your own indexes of containers (with some metadata per object). for example, you may want to sort on something other than object name or you may want to quickly list the objects in a container with a million files. both of these use cases are hard or impossible with swift  22:36
<notmyname> well, current versions of swift ;-)  22:37
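[Editor's note: the client-maintained index notmyname suggests can be as simple as caching a container listing and re-sorting it by a field other than object name. A minimal sketch; the listing records and their values are made up for illustration.]

```python
# Sketch: a tiny client-side index over a cached container listing so
# objects can be sorted by size instead of by name. Records are invented
# examples, not real swift output.
listing = [
    {"name": "report-b", "bytes": 420, "last_modified": "2011-05-28"},
    {"name": "report-a", "bytes": 9001, "last_modified": "2011-05-29"},
    {"name": "report-c", "bytes": 7, "last_modified": "2011-05-27"},
]

# Index by object name for O(1) lookup, plus a view ordered by size,
# which swift's name-ordered listings can't provide directly.
by_name = {obj["name"]: obj for obj in listing}
by_size = sorted(listing, key=lambda obj: obj["bytes"])

print([obj["name"] for obj in by_size])  # ['report-c', 'report-b', 'report-a']
```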
*** Ryan_Lane has joined #openstack22:37
<lucasnodine> Yea, are there any swift-lucene setups yet? ;P  22:37
<lucasnodine> that would make it easier maybe *shrug*  22:37
*** ziyadb has quit IRC22:39
*** zenmatt has quit IRC22:50
*** miclorb_ has joined #openstack22:53
*** julian_c has quit IRC22:57
*** hadrian has quit IRC23:12
*** hadrian has joined #openstack23:13
*** zenmatt has joined #openstack23:30
*** kennethkalmer has quit IRC23:41
*** zenmatt_ has joined #openstack23:41
*** zenmatt has quit IRC23:42
*** jeffjapan has joined #openstack23:50
*** obino has quit IRC23:52
*** Eyk has quit IRC23:55
*** zenmatt_ has quit IRC23:56
*** MarkAtwood has joined #openstack23:56