nelson__ | creiht: woot! | 00:14 |
---|---|---|
* jeremyb wonders what nelson__ is wooting | 00:15 | |
nelson__ | it's a secret, or you can read the scroll-back. | 00:15 |
jeremyb | can you have a different replica count per container or per account? | 00:15 |
jeremyb | i tried /lastlog | 00:15 |
nelson__ | (actually, I fucked up) | 00:16 |
nelson__ | (and now I'm wooting the unfuck) | 00:16 |
jeremyb | can you (cheaply) move containers from one account to another? | 00:16 |
nelson__ | are you asking me? I don't know, although I expect I eventually will. | 00:16 |
jeremyb | i'm asking the wind | 00:16 |
notmyname | jeremyb: there is a server-side copy that will move objects from one container to another. uses the cluster's internal network, so no buffering or proxying to the client | 00:18 |
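The server-side copy notmyname refers to is a zero-byte PUT to the destination object carrying an X-Copy-From header; the proxy streams the data over the cluster's internal network. A hedged sketch, with the endpoint, token, and names as placeholders:

```python
# Hedged sketch of a Swift server-side object copy: a zero-byte PUT to the
# destination with X-Copy-From naming the source. The storage URL, token, and
# container/object names below are placeholders.
import requests

storage_url = 'http://proxy.example.com:8080/v1/AUTH_test'   # placeholder account URL
token = 'AUTH_tk_example'                                     # placeholder auth token

resp = requests.put(
    storage_url + '/new-container/photo.bmp',                 # destination object
    headers={
        'X-Auth-Token': token,
        'X-Copy-From': '/old-container/photo.bmp',            # /source-container/source-object
        'Content-Length': '0',
    })
resp.raise_for_status()
```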
jeremyb | notmyname: right i know about that... but can you just move a container? or alternatively (same result) change the owner for the whole container? | 00:18 |
notmyname | jeremyb: ah, sorry. just saw the word "account" | 00:18 |
notmyname | if you change the account, the hash for every object (and container) changes, so everything would have to be replicated to the right node | 00:19 |
jeremyb | :/ | 00:19 |
notmyname | and replication can't do that now (it doesn't rehash) | 00:19 |
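To make the rehashing point concrete: Swift places an object by hashing its full /account/container/object path onto the ring, so a new account name changes every placement. A simplified illustration (the real code also mixes in a configured hash suffix and maps the digest to a ring partition):

```python
# Simplified illustration of why changing the account means re-replicating
# everything: placement is derived from a hash of the full
# /account/container/object path (the real ring also mixes in a configured
# hash suffix and maps the digest to a partition).
from hashlib import md5

def placement_hash(account, container, obj, suffix='changeme'):
    path = '/%s/%s/%s' % (account, container, obj)
    return md5((path + suffix).encode('utf-8')).hexdigest()

print(placement_hash('AUTH_alice', 'photos', 'cat.bmp'))
print(placement_hash('AUTH_bob', 'photos', 'cat.bmp'))   # different hash, different nodes
```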
jeremyb | notmyname: and replica count? | 00:20 |
*** EdwinGrubbs has quit IRC | 00:20 | |
*** zul has quit IRC | 00:20 | |
notmyname | it's global per cluster. we've talked about how to do a selectable redundancy, but it's tricky | 00:21 |
jeremyb | hrmmm | 00:21 |
* jeremyb is thinking about hacking in compression himself | 00:21 | |
notmyname | compression as an object streams in? | 00:21 |
jeremyb | or afterwards | 00:22 |
jeremyb | compressed on disk | 00:22 |
notmyname | it would be trivial to do as the object is PUT/fetched in middleware. I don't know if it's a good thing, but it would be simple to do | 00:23 |
notmyname | redbo has played around with some ideas there | 00:23 |
jeremyb | when's he usually around? | 00:23 |
notmyname | it depends a lot on the use case for the cluster | 00:23 |
notmyname | redbo is like the wind. he's always here | 00:24 |
jeremyb | well i'm migrating from a service that already does compression. some of our objects are bitmaps with big swaths of solid colors (e.g. html backgrounds) | 00:24 |
notmyname | so you are standing up a swift cluster to replace your current storage? | 00:25 |
jeremyb | yes, current storage is a custom service that sucks (we built it and no one else ever tested it) and requires all nodes to have a complete copy of all data | 00:26 |
Ryan_Lane | \o/ | 00:26 |
Ryan_Lane | looks like I had to wipe out my flat network and create a new network to switch network drivers | 00:26 |
notmyname | if you're the only use case, you could put some middleware in front that could compress based on object name (eg .endswith('.bmp')) | 00:27 |
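A minimal sketch of the middleware notmyname is suggesting, assuming a gzip-on-PUT filter keyed on object name; the class, filter, and config names are invented, and buffering the whole body in memory is only reasonable for small objects:

```python
# Minimal gzip-on-PUT WSGI filter keyed on object name. All names here are
# illustrative; real middleware would also need streaming, error handling, and
# a matching decision on the GET side.
import gzip
import io


class CompressOnPut(object):
    def __init__(self, app, suffixes=('.bmp',)):
        self.app = app
        self.suffixes = tuple(suffixes)

    def __call__(self, environ, start_response):
        if (environ.get('REQUEST_METHOD') == 'PUT'
                and environ.get('PATH_INFO', '').endswith(self.suffixes)
                and 'HTTP_CONTENT_ENCODING' not in environ):
            raw = environ['wsgi.input'].read()         # buffers in memory: small objects only
            buf = io.BytesIO()
            gz = gzip.GzipFile(fileobj=buf, mode='wb')
            gz.write(raw)
            gz.close()
            data = buf.getvalue()
            environ['wsgi.input'] = io.BytesIO(data)
            environ['CONTENT_LENGTH'] = str(len(data))
            environ['HTTP_CONTENT_ENCODING'] = 'gzip'  # so readers know it is compressed
        return self.app(environ, start_response)


def filter_factory(global_conf, **local_conf):
    suffixes = local_conf.get('suffixes', '.bmp').split()
    return lambda app: CompressOnPut(app, suffixes)


# Wiring it in would look roughly like this in proxy-server.conf (section and
# entry names are made up):
#   [pipeline:main]
#   pipeline = healthcheck cache compress proxy-server
#   [filter:compress]
#   paste.filter_factory = mymiddleware.compress:filter_factory
#   suffixes = .bmp .eml
```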
jeremyb | well i would at least want the object to have an attribute indicating whether or not it's compressed | 00:28 |
jeremyb | so when reading back out it works no matter what | 00:28 |
notmyname | sounds like content-type/content-encoding works for that | 00:29 |
jeremyb | (e.g. we store some uncompressed and then add the middleware and store some more and then fetch them all) | 00:29 |
jeremyb | i wonder if we could then compress them just by doing an in place copy | 00:29 |
jeremyb | (the same way you change metadata on an object) | 00:30 |
notmyname | that's only needed for changing the content-type | 00:30 |
notmyname | and the copy happens in the proxy server, so the wsgi stack isn't processed again | 00:30 |
*** joearnold has quit IRC | 00:30 | |
notmyname | IOW: no | 00:30 |
jeremyb | i was momentarily looking for a user named IOW | 00:31 |
notmyname | heh | 00:31 |
notmyname | you should be able to change the content-encoding with a POST | 00:31 |
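Something like the following should do that POST, assuming the cluster accepts Content-Encoding on an object POST as notmyname describes; the URL and token are placeholders, and note that an object POST replaces the object's custom metadata, so resend anything you want to keep:

```python
# Hedged example of updating Content-Encoding on an existing object with a
# POST. The URL and token are placeholders. An object POST replaces the
# object's custom metadata, so resend any X-Object-Meta-* you want to keep.
import requests

url = 'http://proxy.example.com:8080/v1/AUTH_test/images/background.bmp'  # placeholder
resp = requests.post(url, headers={
    'X-Auth-Token': 'AUTH_tk_example',   # placeholder token
    'Content-Encoding': 'gzip',
    'Content-Type': 'image/bmp',         # keep the content type you already had
})
resp.raise_for_status()
```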
jeremyb | any idea where i might find history of past discussions on compression? or just wait for redbo? | 00:32 |
notmyname | talking to redbo would be best | 00:34 |
*** EdwinGrubbs has joined #openstack | 00:34 | |
jeremyb | notmyname: how about replica count per container or per account? do you remember who was talking about that? | 00:36 |
*** reldan has quit IRC | 00:37 | |
gholt | colinnich: Good catch. I'll remerge from trunk. | 00:38 |
jeremyb | ahhh, gholt | 00:38 |
*** jc_smith has quit IRC | 00:50 | |
notmyname | jeremyb: it was most of us on the swift dev team | 00:54 |
jeremyb | notmyname: k. don't care about the object level, could live with acct, would prefer container level | 00:55 |
jeremyb | i may also try hacking that one but i think it's lower priority than compression | 00:56 |
*** rlucio has quit IRC | 01:03 | |
*** zul has joined #openstack | 01:25 | |
Xenith | I see that in the nova wiki, it says that larger instance types get some form of local storage. Is that local to the host system? Is it different than nova-volume? | 01:28 |
*** Jordandev has quit IRC | 01:32 | |
vishy | anyone familiar with launchpad ppas here? | 01:43 |
*** daleolds has quit IRC | 01:49 | |
*** dirakx has joined #openstack | 01:51 | |
eday | vishy: mtaylor would be a good one to ping too | 01:53 |
eday | vishy: or just in #launchpad | 01:53 |
vishy | i think i got this | 01:53 |
eday | ^ask | 01:53 |
uvirtbot | eday: Error: "ask" is not a valid command. | 01:53 |
*** dragondm has quit IRC | 01:55 | |
*** leted has quit IRC | 02:06 | |
*** leted has joined #openstack | 02:08 | |
mtaylor | vishy: what did I do? | 02:08 |
vishy | mtaylor: trying to figure out how to use this ppa stuff | 02:10 |
mtaylor | vishy: as in you want to upload new packages? or you want to use packages that are in it? | 02:10 |
vishy | i'm trying to build and upload packages | 02:11 |
mtaylor | ok. so the main thing is to create a source package (bzr bd -S should do the trick if you're using one of our packaging branches) | 02:11 |
vishy | i'm using bzr-builddeb | 02:12 |
vishy | -S is what i need? | 02:12 |
vishy | but i was getting an error about not having the orig.tar.gz | 02:12 |
mtaylor | great. yes - -S will tell it to make a source rather than binary package | 02:12 |
mtaylor | ah - you may need to do this then: | 02:13 |
mtaylor | bzr bd -S --builder='debuild -S -sa' | 02:13 |
mtaylor | -sa tells debuild to always include the .orig.tar.gz | 02:13 |
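Putting the build step together with the upload, a rough sketch driven from Python; the PPA name and the generated .changes path are placeholders:

```python
# Rough sketch of the build-and-upload sequence, driven from Python. In
# practice these are just two shell commands run inside the packaging branch;
# the PPA name and the generated .changes path are placeholders.
import subprocess

# Build a source package, forcing the .orig.tar.gz to be included (-sa).
subprocess.check_call(['bzr', 'bd', '-S', '--builder=debuild -S -sa'])

# Upload the resulting source .changes file to the PPA with dput (the result
# location depends on how bzr-builddeb is configured).
subprocess.check_call(['dput', 'ppa:example/nova-ppa',
                       '../build-area/nova_2011.1~example_source.changes'])
```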
mtaylor | and now I have to run away for a bit... | 02:15 |
*** leted has quit IRC | 02:20 | |
*** maplebed has quit IRC | 02:29 | |
*** hadrian has quit IRC | 02:46 | |
*** hggdh has quit IRC | 02:54 | |
*** leted has joined #openstack | 02:55 | |
creiht | colinnich: yeah the swift-bench thing is fixed in current trunk now | 02:56 |
creiht | nelson__: yay | 02:56 |
creiht | jeremyb: no | 02:57 |
creiht | :) | 02:57 |
creiht | jeremyb: It is possible that selectable redundancy may be solved when we answer the multi-region replication stuff | 02:59 |
*** dirakx has quit IRC | 03:00 | |
creiht | jeremyb: Redbo did some testing, and I think that it was showing only about 7% compression for our use cases | 03:02 |
creiht | Most things are already compressed, or in a format that doesn't compress well (encrypted) | 03:02 |
creiht | If you have a lot of bmps with large color regions, maybe they should be stored in a better format? :) | 03:03 |
jeremyb | creiht: yeah, i'm also very interested in multiregion | 03:07 |
creiht | many people are :) | 03:07 |
creiht | It is a difficult problem | 03:07 |
jeremyb | hehe, maybe they should be in a better format! | 03:08 |
creiht | that said, you could probably do compression with middleware pretty easily | 03:08 |
jeremyb | right now i think they're bitmaps (uncompressed) wrapped in java serialized objects and then gzipped | 03:08 |
creiht | heh | 03:08 |
creiht | if they are gzipped, then they are already compressed right? | 03:08 |
*** leted has quit IRC | 03:11 | |
*** dirakx has joined #openstack | 03:18 | |
*** joearnold has joined #openstack | 04:00 | |
*** jimbaker has quit IRC | 04:09 | |
redbo | Transparent compression should be pretty easy, the difficult part is recognizing what files will compress without actually trying it out on them. | 04:14 |
jeremyb | redbo: i was thinking the client would tell you to compress or not | 04:17 |
jeremyb | during PUT | 04:17 |
redbo | that would be easy if you can rely on that | 04:18 |
redbo | But yeah if something's already gzipped, it won't really compress any more. | 04:19 |
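One common compromise for the problem redbo raises is to test-compress only a small sample of the incoming data and skip compression when the ratio is poor; a rough sketch, not anything that exists in Swift:

```python
# Rough sketch of one way to guess compressibility cheaply: gzip a small
# sample of the data and look at the ratio. Purely illustrative.
import gzip
import io


def looks_compressible(sample, min_saving=0.10):
    """Return True if gzipping the sample saves at least min_saving."""
    if not sample:
        return False
    buf = io.BytesIO()
    gz = gzip.GzipFile(fileobj=buf, mode='wb')
    gz.write(sample)
    gz.close()
    return len(buf.getvalue()) <= (1.0 - min_saving) * len(sample)


# e.g. probe the first 64 KiB of an incoming object:
# should_compress = looks_compressible(first_chunk[:65536])
```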
jeremyb | right of course | 04:19 |
jeremyb | creiht: yeah... as long as we're migrating i'd rather unwrap them though. we can have metadata on the swift object to specify stuff like the version of the system that produced the image instead of having a java object to bundle the image and metadata and storing a serialization of the java object | 04:22 |
jeremyb | what metadata does the container server know for an object? | 04:22 |
jeremyb | vs. having to talk to a storage server to get metadata | 04:23 |
redbo | It should be pretty easy, but I haven't done any code or anything. So far I was just collecting info on how compressable our corpus of data is. | 04:23 |
jeremyb | (e.g. if i'm iterating over a whole container looking for key x=y) | 04:23 |
jeremyb | right | 04:23 |
jeremyb | what is that btw? | 04:23 |
jeremyb | i have images (currently wrapped in java serialized objects), email source (plain text with headers), and 3 other kinds of java serialized objects | 04:24 |
*** hggdh has joined #openstack | 04:24 | |
jeremyb | currently we just blanket compress all images and email source, not sure if we do or don't for the other types | 04:25 |
redbo | Oh, I was sampling stuff from our public service. I don't know what all is in there. | 04:26 |
jeremyb | k. you're racklabs right? | 04:26 |
*** guynaor has joined #openstack | 04:27 | |
*** guynaor has left #openstack | 04:27 | |
redbo | I'm on the Rackspace cloudfiles team, yeah. The racklabs name is kind of nebulous. | 04:29 |
jeremyb | i just saw people with that rdns | 04:31 |
jeremyb | hrmmm, do you work with mschuler? | 04:34 |
redbo | yep | 04:35 |
jeremyb | he came to DebConf10 | 04:35 |
jeremyb | he was the first or second way i heard about openstack | 04:36 |
*** adiantum has quit IRC | 04:38 | |
redbo | Cool. We're pretty much on the same team. | 04:38 |
jeremyb | redbo: so do you think there would be interest in having compression support in a release? | 04:42 |
redbo | yes. But I'm thinking the thing that chooses what to compress might need to be pluggable, since it probably varies a lot by use case. | 04:44 |
*** jdurgin has joined #openstack | 04:47 | |
jeremyb | i was thinking it would be specified outside swift (i.e. by the user or client app) | 04:47 |
jeremyb | and you could have a default set at the container/acct/global level | 04:49 |
jeremyb | and probably also allow middleware to meddle in the decision | 04:50 |
redbo | yeah, that's what I was just trying to figure out how to say. if it's a header a user can tack on and gets passed down to the storage server, middleware could also mutate that request based on whatever info it has. | 04:51 |
jeremyb | right | 04:51 |
jeremyb | and the header can be compress|don'ttouch|defer to default. the middleware probably should only meddle if it's defer | 04:52 |
jeremyb | also compression formats should be pluggable and i should be able to specify which one | 04:53 |
redbo | I think we'd want to have middleware that makes the whole decision. | 04:53 |
jeremyb | sure | 04:54 |
jeremyb | and if the middleware's not there just store with no meddling | 04:54 |
redbo | Makes sense | 04:54 |
jeremyb | (was typing) e.g. someone might care a lot about space and use something with 10% better ratio and someone else might just want to be able to send it through bit for bit to an HTTP client that supports gzip encoding | 04:54 |
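A sketch of the negotiation being described: the client sends a directive header, the middleware only fills in a decision when the client defers, and the chosen codec is recorded so reads work either way. The header name, its values, and the default policy are invented for illustration:

```python
# Illustration of the decision logic sketched above. The X-Compress header,
# its values, and the default policy are invented for this example; nothing
# here is an existing Swift convention.
def decide_compression(environ, default_suffixes=('.bmp', '.eml')):
    directive = environ.get('HTTP_X_COMPRESS', 'default').lower()
    if directive == 'always':
        return 'gzip'
    if directive == 'never':
        return None
    # 'default': the middleware is allowed to meddle, e.g. by object name
    if environ.get('PATH_INFO', '').endswith(default_suffixes):
        return 'gzip'
    return None
```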
*** joearnold has quit IRC | 04:56 | |
jeremyb | is there a contrib section for optional middleware a la django? | 04:56 |
redbo | We've just been putting things in swift/common/middleware/ and not making them part of the default config for now. | 04:57 |
redbo | but we'll have to do something more scalable if we keep coming up with these things | 04:58 |
jeremyb | heh | 04:59 |
* jeremyb now has some questions about the rackfiles stuff beyond swift... idk if you can answer | 05:01 | |
jeremyb | does it support compression on the wire? if there's a popular object does it compress once, cache, and reuse for some period of time? | 05:01 |
*** Abd4llA has joined #openstack | 05:03 | |
jeremyb | do people (rackspace and others) generally expose swift directly or put some kind of cache in front of it? (i was discussing varnish with someone elsewhere. for a use case with all public data and no need for bandwidth accounting) | 05:03 |
redbo | Yeah, we have caches in front of it. I don't know that much about it. Which is another place where Swift is sort of tailored to our use case and needs some improvement. | 05:06 |
redbo | I wouldn't expose it directly to public traffic | 05:06 |
jeremyb | ok, very good to know... | 05:07 |
redbo | Well, there's this silly thing where we don't use the Linux page cache, because we very rarely serve the same file from swift multiple times. It'd be really easy to turn off. | 05:09 |
creiht | jeremyb: If you are really interested in the compression stuff, then I would submit a blueprint, and that will help kick off a discussion | 05:09 |
jeremyb | creiht: k, i should probably get my own cluster going first, right? | 05:10 |
creiht | jeremyb: and yeah, for cloudfiles, most public content is sent out through the cdn | 05:10 |
creiht | jeremyb: or at least the saio | 05:10 |
jeremyb | saio? swift all in one? | 05:10 |
jeremyb | oh | 05:10 |
creiht | yup | 05:10 |
jeremyb | i thought you were saying through the cdn or at least the saio | 05:11 |
jeremyb | but i'm sorted now :) | 05:11 |
creiht | If we decide to enable public containers in cloudfiles, we will probably need to think a little more about that part | 05:11 |
redbo | yeah. I think there's a bug in lp to make that configurable. | 05:12 |
jeremyb | i think i heard this from your team but to double check: rackspace runs storage, container and account servers on all storage nodes and proxy servers are all on dedicated proxy nodes? so then there's a 3rd set of nodes caching/balancing in front of the proxies? is that 3rd set software or netscalers? can you say how that's set up? | 05:12 |
creiht | we have a load balancing layer in front of the proxies | 05:13 |
creiht | we are currently using zeus for that | 05:13 |
creiht | for public content, that is currently through the cdn integration | 05:13 |
jeremyb | so zeus is exposed directly to the outside DNS round robin'd? | 05:14 |
creiht | no | 05:14 |
creiht | but I really can't talk about that part :) | 05:14 |
jeremyb | heh | 05:15 |
jeremyb | so, zeus is for everything or just the cdn or ? | 05:15 |
creiht | zeus is in front of cloud files | 05:15 |
creiht | the cdn is via another provider | 05:15 |
creiht | we have a separate service that the cdn provider talks to to be able to pull content | 05:16 |
jeremyb | ok, so the cdn kinda goes in through the back door | 05:16 |
creiht | yes | 05:16 |
jeremyb | and "separate service" is closed service? | 05:16 |
jeremyb | err | 05:16 |
jeremyb | closed source* | 05:16 |
creiht | yeah we haven't opened that yet | 05:16 |
creiht | but it is pretty thin | 05:16 |
creiht | and a bit specific to our setup | 05:17 |
jeremyb | does it hit or bypass the proxies? | 05:17 |
jeremyb | (the swift proxies) | 05:17 |
redbo | Well, I don't think we will open it. Using ACLs to open swift to public traffic is a better solution. | 05:17 |
creiht | I'm pretty sure it goes through the proxies | 05:18 |
creiht | and yeah what redbo said | 05:18 |
creiht | It should be easy to put some caching layer in front of that | 05:18 |
jeremyb | right | 05:18 |
redbo | though it'd be nice... there's no reason why one box sitting in front of a giant cluster should speed up the access. It should be possible to do that caching work in the cluster. | 05:19 |
creiht | redbo: well I was thinking at least 2 boxes :P | 05:20 |
creiht | redbo: but that is an interesting idea as well | 05:20 |
redbo | yeah but one's just failover, right? | 05:20 |
redbo | or did you not allow for failover capacity??? | 05:21 |
jeremyb | hrmmmmmm, anyone done middleware to cache hot stuff in memcached? | 05:22 |
creiht | well the idea would be to route un-authed requests through a caching layer that would either serve what is in cache, or pull from a public container | 05:22 |
creiht | I see your point though, if we had several of those machines, we would need a way for hot content to get to the whole layer | 05:23 |
creiht | so it is a little more difficult than simple :) | 05:24 |
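For the memcached idea floated a few lines up, a very rough sketch of a GET-side cache middleware; it ignores auth, invalidation, and large objects, and all names are made up:

```python
# Very rough sketch of a memcached-backed GET cache in front of the proxy.
# It ignores auth, cache invalidation, and large objects, and every name here
# is made up for illustration.
import memcache


class MemcachedObjectCache(object):
    def __init__(self, app, servers=('127.0.0.1:11211',), ttl=300, max_size=256 * 1024):
        self.app = app
        self.cache = memcache.Client(list(servers))
        self.ttl = ttl
        self.max_size = max_size

    def __call__(self, environ, start_response):
        if environ.get('REQUEST_METHOD') != 'GET':
            return self.app(environ, start_response)
        key = 'objcache:' + environ.get('PATH_INFO', '')
        hit = self.cache.get(key)
        if hit is not None:
            status, headers, body = hit
            start_response(status, headers)
            return [body]

        captured = {}

        def capture(status, headers, exc_info=None):
            captured['status'] = status
            captured['headers'] = headers
            return start_response(status, headers, exc_info)

        body = b''.join(self.app(environ, capture))
        if captured.get('status', '').startswith('200') and len(body) <= self.max_size:
            self.cache.set(key, (captured['status'], captured['headers'], body), time=self.ttl)
        return [body]
```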
creiht | My first thought was to either have a layer of varnish servers, or just run varnish on the proxies | 05:26 |
jeremyb | so where does most middleware live? in proxy server or all the servers? | 05:26 |
creiht | not sure yet | 05:26 |
creiht | well not sure if it would be middleware at the proxy level, or if it would be a separate set of servers | 05:27 |
creiht | that could be anywhere | 05:27 |
jeremyb | not necessarily for caching, middleware in general | 05:27 |
jeremyb | can you put it anywhere you want? | 05:27 |
creiht | ahh... yeah pretty much all middleware goes in front of the proxy | 05:27 |
creiht | redbo: or we could have a separate storage url for public content (like a simple cdn) that would go through a cache | 05:28 |
creiht | it just wouldn't have edges distributed around the globe | 05:29 |
*** ArdRigh has joined #openstack | 05:30 | |
creiht | hrm... is varnish just in memory? | 05:30 |
jeremyb | once multi region repl. is finished will rackfiles support being 1 replica of a private cluster? | 05:30 |
creiht | jeremyb: that has not been decided yet | 05:30 |
* jeremyb definitely would use that | 05:30 | |
jeremyb | nelson__: Ryan_Lane: you should read the last 1/2 hr at some point | 05:32 |
jeremyb | creiht: redbo notmyname: thanks! | 05:32 |
creiht | hrm... the varnish docs are a bit difficult to navigate | 05:38 |
creiht | ok sounds like it does use disk to cache | 05:40 |
jeremyb | memory too? | 05:41 |
creiht | yeah | 05:41 |
jeremyb | through filesystem cache or some other way? | 05:41 |
creiht | not entirely sure | 05:41 |
creiht | http://www.varnish-cache.org/trac/wiki/ArchitecturePersistentStorage | 05:41 |
creiht | Is what I just started reading | 05:41 |
* jeremyb looks | 05:42 | |
*** arcane has quit IRC | 05:43 | |
jeremyb | creiht: /mem/i appears once on that page and not in our context | 05:43 |
pandemicsyn | creiht: it uses both | 05:44 |
creiht | pandemicsyn: yeah that is what it seems like | 05:44 |
pandemicsyn | its been awhile but it uses disk as cold storage and ram for hot i think | 05:44 |
jeremyb | pandemicsyn: both what? filesys cache and heap? | 05:44 |
creiht | their docs can be a bit difficult to follow :) | 05:44 |
pandemicsyn | last time i used it apache 2.1 was still cool so it might be different now ;) | 05:45 |
creiht | http://en.wikipedia.org/wiki/Varnish_(software) | 05:45 |
jeremyb | http://identi.ca/notice/56504214 http://www.mediawiki.org/wiki/Manual:Varnish_caching http://wikitech.wikimedia.org/view/Varnish (reading those wiki pages now) | 05:51 |
creiht | jeremyb: http://trafficserver.apache.org/ | 05:55 |
creiht | is another interesting option | 05:55 |
jeremyb | creiht: oooooh, never heard of this before | 05:56 |
creiht | traffic server actually looks a bit intriguing for this use case | 06:04 |
jeremyb | well i can't really think about it now, nearly asleep! | 06:07 |
creiht | hehe.. me too | 06:08 |
creiht | have a good weekend | 06:08 |
jeremyb | you too | 06:12 |
redbo | creiht: I'm pretty sure varnish just mmaps the files, so they just use the kernel's page cache (LRU or whatever). | 06:18 |
anticw | it does | 06:19 |
redbo | creiht: but I was saying there's no reason we couldn't do the same sort of thing on our storage nodes. There's nothing magical about Varnish, it just keeps files on ram/disk. | 06:23 |
*** jdurgin has quit IRC | 06:42 | |
*** kashyapc has joined #openstack | 07:13 | |
*** skrusty has quit IRC | 07:31 | |
*** skrusty has joined #openstack | 07:45 | |
*** gasbakid has joined #openstack | 07:49 | |
*** reldan has joined #openstack | 08:11 | |
*** trin_cz has joined #openstack | 08:18 | |
*** gasbakid__ has joined #openstack | 08:31 | |
*** gasbakid__ has quit IRC | 08:33 | |
*** gasbakid has quit IRC | 08:39 | |
*** gasbakid__ has joined #openstack | 08:39 | |
*** allsystemsarego has joined #openstack | 09:17 | |
*** gasbakid__ has quit IRC | 09:38 | |
*** gasbakid has joined #openstack | 09:39 | |
*** kashyapc has quit IRC | 09:55 | |
*** Abd4llA has quit IRC | 10:43 | |
*** adiantum has joined #openstack | 11:17 | |
*** Abd4llA has joined #openstack | 11:35 | |
*** fabiand_ has joined #openstack | 11:43 | |
*** ArdRigh has quit IRC | 11:47 | |
*** Abd4llA has quit IRC | 11:47 | |
*** omidhdl has joined #openstack | 12:07 | |
*** opengeard has quit IRC | 12:09 | |
*** vanne has joined #openstack | 12:19 | |
*** omidhdl has quit IRC | 12:30 | |
*** opengeard has joined #openstack | 12:30 | |
*** fabiand_ has quit IRC | 13:04 | |
*** fabiand_ has joined #openstack | 13:04 | |
*** fabiand_ has quit IRC | 13:09 | |
*** adiantum has quit IRC | 13:29 | |
*** allsystemsarego has quit IRC | 14:06 | |
*** MarkAtwood has quit IRC | 14:09 | |
*** hadrian has joined #openstack | 14:10 | |
*** MarkAtwood has joined #openstack | 14:15 | |
*** fabiand_ has joined #openstack | 14:26 | |
*** dabo_ has joined #openstack | 14:32 | |
*** dabo_ has quit IRC | 14:34 | |
*** alekibango has quit IRC | 14:35 | |
*** alekibango has joined #openstack | 14:38 | |
*** gasbakid has quit IRC | 14:39 | |
*** vanne has left #openstack | 14:40 | |
*** joearnold has joined #openstack | 14:47 | |
*** alekibango has quit IRC | 15:08 | |
*** dirakx has quit IRC | 15:28 | |
*** noagendamarket has joined #openstack | 15:28 | |
*** joearnold has quit IRC | 15:45 | |
*** reldan has quit IRC | 15:45 | |
*** reldan has joined #openstack | 15:47 | |
*** drico has joined #openstack | 16:00 | |
*** joearnold has joined #openstack | 16:03 | |
*** trin_cz has quit IRC | 16:09 | |
*** ken_barber has joined #openstack | 16:27 | |
*** michael has joined #openstack | 16:28 | |
*** michael is now known as Guest50842 | 16:29 | |
*** ken_barber has quit IRC | 16:29 | |
*** alekibango has joined #openstack | 16:31 | |
openstackhudson | Project nova build #371: STILL FAILING in 9.7 sec: http://hudson.openstack.org/job/nova/371/ | 16:33 |
*** reldan has quit IRC | 16:40 | |
*** noagendamarket has left #openstack | 16:40 | |
*** Guest50842 has quit IRC | 16:40 | |
*** reldan has joined #openstack | 16:44 | |
*** fabiand_ has quit IRC | 16:54 | |
*** jfluhmann has joined #openstack | 17:13 | |
*** rogue780 has joined #openstack | 17:17 | |
*** fabiand_ has joined #openstack | 17:25 | |
*** joearnold has quit IRC | 17:31 | |
*** hggdh has quit IRC | 17:35 | |
*** allsystemsarego has joined #openstack | 17:41 | |
*** fabiand_ has quit IRC | 17:52 | |
*** fabiand_ has joined #openstack | 19:03 | |
*** opengeard has quit IRC | 19:15 | |
*** fabiand_ has quit IRC | 19:25 | |
*** pothos has quit IRC | 19:25 | |
*** cw has quit IRC | 19:25 | |
*** jfluhmann_ has quit IRC | 19:25 | |
*** filler has quit IRC | 19:25 | |
*** ksteward has quit IRC | 19:25 | |
*** jeremyb has quit IRC | 19:25 | |
*** hggdh has joined #openstack | 19:25 | |
*** fabiand_ has joined #openstack | 19:26 | |
*** pothos has joined #openstack | 19:26 | |
*** cw has joined #openstack | 19:26 | |
*** jfluhmann_ has joined #openstack | 19:26 | |
*** filler has joined #openstack | 19:26 | |
*** ksteward has joined #openstack | 19:26 | |
*** jeremyb has joined #openstack | 19:26 | |
*** pothos has quit IRC | 19:26 | |
*** pothos has joined #openstack | 19:26 | |
*** fabiand_ has quit IRC | 19:40 | |
vishy | mtaylor, soren, dendrobates, ttx: anyone around? | 19:56 |
vishy | trying to figure out how to upload an unchanged package to a ppa. Is it even possible? | 19:58 |
vishy | the specific issue I'm having is that nova needs a newer version of sphinx to build properly. I'm trying to upload the version of sphinx from the nova-ppa | 20:00 |
vishy | i can download the package via apt-get source sphinx, but I'm not sure how to upload it to the ppa without rebuilding and generating a changes file | 20:01 |
*** fabiand_ has joined #openstack | 20:03 | |
*** herki_ is now known as herki | 20:19 | |
vishy | ok i think i may have this issue solved by using a build dependency | 20:21 |
*** fabiand_ has quit IRC | 20:22 | |
vishy | and there is a ui for copying packages, nice | 20:24 |
dendrobates | vishy: I'm around now | 20:36 |
vishy | dendrobates: i think i figured out everything i needed | 20:36 |
vishy | dendrobates: i'm curious how hudson handles the changelogs when it builds a new package from the source tree | 20:37 |
dendrobates | vishy: did you find dput? That is what I use to upload | 20:37 |
vishy | dendrobates: yeah got that part worked out | 20:37 |
dendrobates | vishy: I'm not sure; hudson is a mystery to me | 20:38 |
vishy | dendrobates: anyway, is there a way to specify in a changelog or with builddeb to use multiple targets? or do you need three separate changelogs? | 20:39 |
vishy | building for lucid/maverick/natty for example | 20:39 |
dendrobates | You only need one changelog; you can configure the ppa to build different targets | 20:40 |
dendrobates | vishy: or maybe that's the recipes; they have made changes and the recipes are new | 20:43 |
jeremyb | recipes are a lp.net thing? | 20:45 |
*** Abd4llA has joined #openstack | 20:48 | |
*** leted has joined #openstack | 20:52 | |
*** Abd4llA has quit IRC | 21:02 | |
*** codehotter has joined #openstack | 21:10 | |
codehotter | hi. Can I get started with openstack, just because it's cool, if I only have a single server? (our company will grow to millions of servers in the future, I'm sure!) | 21:15 |
Jbain | I believe there is a page on the wiki about how to install it on a single box, lemme look for the link | 21:17 |
jeremyb | openstack has 2 mostly separate projects... are you talking about swift or nova? (file / object storage or virtualization?) | 21:17 |
codehotter | Why not both? How feasible is it? What performance overhead will I pay? | 21:18 |
codehotter | How integrated is Openstack Compute with the hypervisors it supports? How much effort is it to support an additional hypervisor? | 21:19 |
soren | vishy: I'm around now. | 21:19 |
soren | vishy: What's up? | 21:19 |
Jbain | I would imagine it would take a significant amount of coding to add support for a new hypervisor | 21:19 |
soren | Depends on the hypervisor, really. | 21:20 |
Jbain | you have to integrate it with the API | 21:20 |
codehotter | Are there any numbers on performance overhead? And any documents on architectural design? | 21:20 |
codehotter | (I've browsed the wiki for about 30 minutes) | 21:20 |
soren | vishy: You don't strictly need three separate changelogs. | 21:20 |
soren | vishy: What really matters is the "Distribution: " line in the .changes file. This is based on what's in debian/changelog, but can be overridden. | 21:21 |
Jbain | soren: HyperV is in the process of being added isn't it? | 21:21 |
soren | Jbain: It's in. | 21:22 |
soren | Jbain: Since a couple of days ago. | 21:22 |
Jbain | sweet | 21:22 |
*** Abd4llA has joined #openstack | 21:22 | |
Jbain | I've been lurking around here for a while, waiting for a few more paychecks before I get some equipment to finally be able to play around with this stuff :) | 21:23 |
jeremyb | hyperv?! | 21:23 |
jeremyb | you must be kidding | 21:23 |
Jbain | i'm not a huge fan of it, but the higher-ups like windows >.> | 21:23 |
jeremyb | uhhhhh | 21:24 |
codehotter | sigh | 21:24 |
jeremyb | no hyperv | 21:24 |
Jbain | lol | 21:24 |
jeremyb | on my boxen anyway! | 21:24 |
Jbain | I would stick with Xen for my stuff most likely | 21:24 |
Jbain | but I'd like to try it with hyperV | 21:24 |
codehotter | Where can I find documents on openstack's architecture? How does it work? What does it do? | 21:24 |
Jbain | jeremyb: your nick is weirding me out, man | 21:25 |
Jbain | lol | 21:25 |
alekibango | codehotter: search wiki for word architecture... there are some images, descriptions and links :) | 21:25 |
alekibango | look also into the nova devel docs, there is a nice ascii art image | 21:25 |
Jbain | <3 ascii art | 21:26 |
alekibango | codehotter: my images of arch are there: http://alekiba.dyndns.org/nova/ | 21:26 |
codehotter | alekibango: thanks, that's exactly what I was looking for | 21:26 |
notmyname | codehotter: swift architecture can be found at swift.openstack.org | 21:26 |
alekibango | beware - some of them are not 100% accurate | 21:26 |
alekibango | C[12] | 21:27 |
alekibango | they are just possible futures | 21:27 |
alekibango | codehotter: there are some different nova services, which can all run on different machines | 21:27 |
jeremyb | Jbain: what's your name? | 21:29 |
Jbain | Jeremy Bain | 21:30 |
jeremyb | uhuh | 21:30 |
Jbain | well, Bain is my middle name, but w/e | 21:31 |
alekibango | family reunion again, hehe | 21:31 |
Jbain | srs | 21:33 |
mtaylor | vishy: heythere | 21:48 |
mtaylor | vishy, dendrobates: I don't use recipes myself | 21:49 |
mtaylor | vishy: and yeah - you need separate changelogs for each distro target (lucid/maverick/etc) ... recipes can help you generate the different source packages, but I usually do it by hand because it's just easier | 21:49 |
mtaylor | vishy: oh - well, believe soren rather than me re: .changes vs. debian/changelog | 21:50 |
dendrobates | mtaylor: yeah the recipes are new but they do a lot of that for you. | 21:50 |
mtaylor | dendrobates: they do - I just find that they do 20 things I don't want and almost nothing that I do want | 21:51 |
*** reldan has quit IRC | 21:51 | |
*** reldan has joined #openstack | 21:53 | |
Xenith | I see that in the nova wiki, it says that larger instance types get some form of local storage. Is that local to the host system? Is it different than nova-volume? | 21:55 |
*** hadrian has quit IRC | 22:01 | |
*** reldan has quit IRC | 22:06 | |
*** trin_cz has joined #openstack | 22:09 | |
*** reldan has joined #openstack | 22:11 | |
*** reldan has quit IRC | 22:13 | |
*** reldan has joined #openstack | 22:14 | |
*** Abd4llA has quit IRC | 22:31 | |
*** allsystemsarego has quit IRC | 22:54 | |
*** drico_ has joined #openstack | 23:00 | |
*** drico has quit IRC | 23:01 | |
*** drico has joined #openstack | 23:17 | |
*** drico_ has quit IRC | 23:19 | |
*** rogue780 has quit IRC | 23:39 | |
*** rogue780 has joined #openstack | 23:47 | |
*** rogue780 has quit IRC | 23:59 |