vishy | non-sdk version | 00:00 |
---|---|---|
Ryan_Lane | cloudfusion was the old library | 00:00 |
vishy | i mean no one else has written a php library? | 00:00 |
Ryan_Lane | it is now the official version ;) | 00:00 |
vishy | i see | 00:00 |
Ryan_Lane | I'm going to try the old version and see if it works | 00:00 |
vishy | k | 00:00 |
Ryan_Lane | though they dropped all the damn documentation | 00:00 |
vishy | it might be helpful to find the exact xml being sent in | 00:01 |
Ryan_Lane | I may be able to get that | 00:01 |
vishy | i'm guessing there might be an extra field or two that should be ignored in the signature calculation | 00:01 |
vishy | i'm heading out for the moment, but I would check the xml sent in by that vs euca tools for the same command and check for extra fields. We might need to add a couple of ignores in the sig calc. | 00:03 |
Ryan_Lane | any easy way to get the xml? | 00:04 |
vishy | sure stick a logging statement in the api | 00:04 |
vishy | it printed out at one point | 00:05 |
Ryan_Lane | ah ok | 00:05 |
vishy | method in api/ec2/__init__.py | 00:07 |
vishy | line 100 | 00:07 |
Ryan_Lane | awesome. thanks | 00:07 |
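The debugging step vishy suggests can be sketched like this; the handler name and surrounding dispatch are hypothetical stand-ins, not the actual nova source in api/ec2/__init__.py:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("nova.api.ec2")

# Hypothetical stand-in for the handler vishy points at: dump the
# incoming request parameters so they can be diffed against what boto
# (or the AWS PHP SDK) sends for the same command.
def handle_request(params):
    logger.debug("request params: %s", sorted(params.items()))
    # ... normal request dispatch would continue here ...
    return params

handle_request({"Action": "DescribeInstances", "Version": "2009-11-30"})
```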
Ryan_Lane | it looks like the old sdk works | 00:07 |
vishy | cool | 00:08 |
vishy | the request params are all there | 00:08 |
Ryan_Lane | at least from the signature POV | 00:08 |
vishy | perhaps the ignore list is ignoring one that it shouldn't | 00:08 |
vishy | or another one needs to be added to the ignore list | 00:08 |
Ryan_Lane | if I can figure it out, I'll submit a bug with a fix | 00:08 |
vishy | if you could print out all the request commands, you'll probably see a mismatch in the api vs boto | 00:09 |
vishy | cool thanks | 00:09 |
vishy | good luck! | 00:09 |
Ryan_Lane | thanks :) | 00:09 |
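The ignore-list idea above can be illustrated with a minimal sketch (the parameter names, secret, and IGNORED set are assumptions, not nova's actual signature code): the `Signature` parameter itself must be excluded before hashing, and any extra field one side fails to ignore makes the two signatures diverge.

```python
import base64
import hashlib
import hmac

# Hypothetical ignore list: 'Signature' must never be part of the
# canonical string, since it is computed from that string.
IGNORED = {"Signature"}

def canonical_string(params, ignored=IGNORED):
    return "&".join("%s=%s" % (k, v)
                    for k, v in sorted(params.items()) if k not in ignored)

def sign(secret, params):
    msg = canonical_string(params).encode()
    return base64.b64encode(hmac.new(secret, msg, hashlib.sha256).digest())

secret = b"secret-key"
base = {"Action": "DescribeInstances", "Version": "2009-11-30"}
# An extra field the server does not ignore breaks signature verification:
with_extra = dict(base, SecurityToken="abc")

print(sign(secret, base) == sign(secret, with_extra))  # False
```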
*** arcane has joined #openstack | 00:10 | |
*** mtaylor has joined #openstack | 00:13 | |
*** ChanServ sets mode: +v mtaylor | 00:13 | |
*** kashyapc has quit IRC | 00:16 | |
*** kashyapc has joined #openstack | 00:17 | |
*** sanjib has quit IRC | 00:19 | |
*** Grizzletooth_ has joined #openstack | 00:19 | |
*** kevnfx has joined #openstack | 00:19 | |
*** Grizzletooth has quit IRC | 00:22 | |
*** kashyapc has quit IRC | 00:27 | |
*** daleolds has quit IRC | 00:27 | |
*** kashyapc has joined #openstack | 00:28 | |
*** DubLo7 has joined #openstack | 00:29 | |
*** dysinger has quit IRC | 00:30 | |
*** kashyapc has quit IRC | 00:34 | |
*** metcalfc has quit IRC | 00:48 | |
*** kashyapc has joined #openstack | 00:51 | |
*** ar1 has joined #openstack | 00:54 | |
*** Grizzletooth has joined #openstack | 01:00 | |
*** joearnold has quit IRC | 01:02 | |
*** jc_smith has quit IRC | 01:02 | |
*** Grizzletooth is now known as Grizzletooth_ | 01:05 | |
*** Grizzletooth_ has quit IRC | 01:07 | |
*** Grizzletooth has joined #openstack | 01:07 | |
*** Grizzletooth has joined #openstack | 01:08 | |
*** hazmat has quit IRC | 01:09 | |
*** Grizzletooth has quit IRC | 01:19 | |
*** huz has joined #openstack | 01:20 | |
Ryan_Lane | vishy: the aws sdk, in string_to_sign, is sending <hostname>, but the api is expecting <hostname>:<port> | 01:22 |
*** gustavomzw has quit IRC | 01:34 | |
*** abecc has joined #openstack | 01:39 | |
Ryan_Lane | vishy: I'm thinking this may actually be a bug in the aws framework. they send one thing, and sign another | 01:39 |
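A minimal sketch of the mismatch Ryan_Lane describes, assuming the standard AWS signature v2 canonical string (hostname and port here are hypothetical): if the client signs `<hostname>` while the verifier reconstructs `<hostname>:<port>`, the HMACs can never match.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

# AWS signature v2 "string to sign": method, Host header value, path,
# then the sorted, percent-encoded query string. The second line is
# where <hostname> vs <hostname>:<port> diverge.
def string_to_sign(method, host, path, params):
    query = "&".join("%s=%s" % (quote(k, safe="-_.~"), quote(v, safe="-_.~"))
                     for k, v in sorted(params.items()))
    return "\n".join([method, host, path, query])

def sign(secret, sts):
    return base64.b64encode(hmac.new(secret, sts.encode(), hashlib.sha256).digest())

params = {"Action": "DescribeInstances",
          "SignatureMethod": "HmacSHA256",
          "SignatureVersion": "2"}
a = sign(b"secret-key", string_to_sign("POST", "cloud.example.com", "/services/Cloud", params))
b = sign(b"secret-key", string_to_sign("POST", "cloud.example.com:8773", "/services/Cloud", params))
print(a != b)  # True: signing host vs host:port gives different signatures
```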
*** abecc has quit IRC | 01:39 | |
*** schisamo has quit IRC | 01:40 | |
*** iammartian_ has joined #openstack | 01:43 | |
*** gaveen has joined #openstack | 02:09 | |
*** daleolds has joined #openstack | 02:15 | |
*** silassewell has joined #openstack | 02:22 | |
*** sophiap has quit IRC | 02:24 | |
huz | hi, i ran into some problems when trying to configure multi server. | 02:28 |
huz | when i use swift-auth-add-user, the command line says: Update failed: 503 Service Unavailable | 02:30 |
huz | the syslog (with the #012 newline escapes expanded): | 02:33 |

    Oct 19 10:32:12 swift01 auth-server ERROR Unhandled exception in ReST request:
    Traceback (most recent call last):
      File "/home/huz/swift/trunk/swift/auth/server.py", line 605, in handleREST
        response = handler(req)
      File "/home/huz/swift/trunk/swift/auth/server.py", line 467, in handle_add_user
        create_account_admin, create_reseller_admin)
      File "/home/huz/swift/trunk/swift/auth/server.py", line 330, in create_user
        account_hash = self.add_storage_account()
      File "/home/huz/swift/trunk/swift/auth/server.py", line 198, in add_storage_account
        (token, time()))
      File "/home/huz/swift/trunk/swift/common/db.py", line 81, in execute
        return self._timeout(lambda: sqlite3.Connection.execute(
      File "/home/huz/swift/trunk/swift/common/db.py", line 74, in _timeout
        return call()
      File "/home/huz/swift/trunk/swift/common/db.py", line 82, in <lambda>
        self, *args, **kwargs))
    OperationalError: attempt to write a readonly database
*** sophiap has joined #openstack | 02:34 | |
*** taihen has quit IRC | 02:36 | |
*** taihen has joined #openstack | 02:37 | |
creiht | huz: sounds like a permission problem? | 02:39 |
Ryan_Lane | vishy: sent a patch, in case others ask: http://developer.amazonwebservices.com/connect/thread.jspa?threadID=53117 | 02:40 |
*** taihen has quit IRC | 02:44 | |
*** taihen has joined #openstack | 02:44 | |
huz | creiht: the proxy service and the Ring only need to run on 1 server, is that correct? | 02:45 |
creiht | yes | 02:45 |
*** gundlach has quit IRC | 02:46 | |
creiht | I would check the ownership/permissions of the mounts on the storage nodes | 02:46 |
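huz's traceback can be reproduced in isolation, which supports creiht's permissions theory. This sketch (table name and path are hypothetical) opens a SQLite database read-only, as wrong ownership on the mount would effectively do, and attempts a write:

```python
import os
import sqlite3
import tempfile

# Create a throwaway database with one table.
path = os.path.join(tempfile.mkdtemp(), "auth.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE token (token TEXT, created REAL)")
conn.commit()
conn.close()

# Reopen it read-only (mode=ro forces this regardless of file owner)
# and try to write, as the auth server does when adding a user.
ro = sqlite3.connect("file:%s?mode=ro" % path, uri=True)
err_msg = ""
try:
    ro.execute("INSERT INTO token VALUES ('tk1234', 0.0)")
except sqlite3.OperationalError as e:
    err_msg = str(e)

print(err_msg)  # attempt to write a readonly database
```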
huz | ok, i'll check that. Another question is about step 10 in SAIO | 02:52 |
huz | it's for devVM, in my situation(5 storage nodes), do i need to configure /etc/rsyncd.conf in one server or in all 5 servers? | 02:53 |
*** krish has joined #openstack | 02:53 | |
creiht | all 5 servers will need an rsyncd.conf | 02:54 |
creiht | but it would be better to reference the rsyncd.conf in swift/etc/rsyncd.conf.sample | 02:54 |
creiht | (from the source tree) | 02:54 |
creiht | the rsyncd.conf from the saio is set up specifically to handle the fact that you are running 4 nodes simultaneously on the same machine (which is not the norm) | 02:55 |
huz | ahh, my rsyncd.conf configurations seem ok, thanks for confirmation. | 02:57 |
creiht | cool | 02:57 |
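For reference, the per-storage-node rsyncd.conf creiht points to (swift's `etc/rsyncd.conf.sample` in the source tree) looks roughly like the following; the paths and bind address are assumptions for a typical install, not a verbatim copy of the sample:

```
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 0.0.0.0

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
```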
*** jc_smith has joined #openstack | 03:24 | |
*** kashyapc has quit IRC | 03:39 | |
*** joearnold has joined #openstack | 03:53 | |
*** sophiap has quit IRC | 03:54 | |
*** sirp1 has quit IRC | 04:06 | |
DesiJat | give him the once over | 04:10 |
DesiJat | oops | 04:10 |
*** exlt has quit IRC | 04:13 | |
*** kashyapc has joined #openstack | 04:13 | |
*** exlt has joined #openstack | 04:14 | |
*** ChanServ sets mode: +v exlt | 04:14 | |
*** silassewell has quit IRC | 04:15 | |
*** kashyapc has quit IRC | 04:18 | |
*** kashyapc has joined #openstack | 04:18 | |
*** Sanjib has joined #openstack | 04:23 | |
*** Sanjib has quit IRC | 04:23 | |
*** exelanz has joined #openstack | 04:24 | |
*** Grizzletooth has joined #openstack | 04:25 | |
exelanz | if someone knows of a good resource to understand the load balancing capabilities of nova, | 04:25 |
exelanz | please guide me to that. | 04:25 |
*** exelanz has quit IRC | 04:28 | |
*** ArdRigh has quit IRC | 04:29 | |
*** joearnold has quit IRC | 04:49 | |
*** joearnold has joined #openstack | 04:51 | |
*** f4m8_ is now known as f4m8 | 04:51 | |
*** miclorb_ has quit IRC | 05:07 | |
*** DubLo7 has quit IRC | 05:22 | |
*** kevnfx has quit IRC | 05:27 | |
*** jc_smith has quit IRC | 05:35 | |
*** ArdRigh has joined #openstack | 05:36 | |
*** daleolds has quit IRC | 05:37 | |
*** suchitp has joined #openstack | 05:40 | |
*** suchitp has left #openstack | 05:50 | |
*** Grizzletooth has quit IRC | 05:52 | |
*** joearnold has quit IRC | 05:53 | |
*** miclorb has joined #openstack | 05:58 | |
*** iammartian_ has quit IRC | 06:01 | |
*** krish has quit IRC | 06:19 | |
*** Cybo has joined #openstack | 06:22 | |
*** Cybo has quit IRC | 06:24 | |
*** arthurc has joined #openstack | 06:25 | |
*** Cybodog has quit IRC | 06:25 | |
*** Cybodog has joined #openstack | 06:26 | |
*** BK_man has quit IRC | 06:27 | |
*** ibarrera has joined #openstack | 06:29 | |
*** Cybodog has quit IRC | 06:31 | |
*** Cybodog has joined #openstack | 06:32 | |
*** krish has joined #openstack | 06:32 | |
*** Cybodog has quit IRC | 06:33 | |
*** coolhandluke has joined #openstack | 06:47 | |
*** miclorb has quit IRC | 07:00 | |
*** omidhdl has joined #openstack | 07:17 | |
*** gaveen has quit IRC | 07:19 | |
*** calavera has joined #openstack | 07:27 | |
*** coolhandluke has quit IRC | 07:37 | |
*** coolhandluke has joined #openstack | 07:42 | |
*** miclorb has joined #openstack | 07:43 | |
*** allsystemsarego has joined #openstack | 07:52 | |
*** kakoni has joined #openstack | 08:20 | |
*** befreax has joined #openstack | 08:24 | |
*** kashyapc has quit IRC | 08:28 | |
*** irahgel has joined #openstack | 08:28 | |
*** coolhandluke has quit IRC | 08:31 | |
*** hornbeck has quit IRC | 08:47 | |
*** befreax has quit IRC | 08:50 | |
*** befreax has joined #openstack | 08:52 | |
*** ptremblett has quit IRC | 08:55 | |
*** hornbeck has joined #openstack | 08:56 | |
*** kakoni has quit IRC | 08:56 | |
zykes- | alekibango: around ? | 09:00 |
*** tonywolf has joined #openstack | 09:01 | |
alekibango | zykes-: yes | 09:12 |
zykes- | any documentation to follow or to use as a basis when doing a multinode install ? | 09:13 |
alekibango | http://etherpad.openstack.org/NovaMultinodeInstall | 09:14 |
alekibango | this is best atm | 09:14 |
alekibango | plz help editing | 09:14 |
zykes- | when you say multinode | 09:17 |
zykes- | you mean splitting up all the different components? | 09:17 |
*** tonywolf has left #openstack | 09:17 | |
*** gaveen has joined #openstack | 09:25 | |
alekibango | multinode means installing on 2+ hardware servers | 09:28 |
alekibango | i would like to cover examples for 4 servers in one cluster -- and 30+ servers in different clusters | 09:28 |
alekibango | single server install of nova is much easier :) | 09:29 |
*** BK_man has joined #openstack | 09:30 | |
*** huz has quit IRC | 09:32 | |
*** jonwood has joined #openstack | 09:33 | |
*** tonywolf has joined #openstack | 09:36 | |
zykes- | alekibango: i can try to do a virtual install of a 4 node one | 09:36 |
alekibango | i will try too | 09:37 |
alekibango | now i am drawing pictures of nova arch, :) | 09:37 |
zykes- | isn't there that allready ? | 09:38 |
zykes- | the graph that came earlier. | 09:38 |
zykes- | http://term.ie/data/nova.png | 09:39 |
arthurc | Hi alekibango, can we install compute nodes on virtual machines ? are they not running kvm ? | 09:41 |
jonwood | arthurc: They'll work with UML as well, or a fake virtualisation driver. | 09:42 |
alekibango | yes if you will use QEMU or UML | 09:42 |
jonwood | And good morning :) | 09:42 |
alekibango | its noon here :) | 09:42 |
alekibango | but have nice 24/7 :) | 09:42 |
*** matclayton has joined #openstack | 09:47 | |
*** Robi__ has joined #openstack | 09:49 | |
*** jonwood has quit IRC | 09:49 | |
*** miclorb has quit IRC | 09:49 | |
*** klumpie has quit IRC | 09:49 | |
*** morfeas has quit IRC | 09:49 | |
*** seats has quit IRC | 09:49 | |
*** xtoddx has quit IRC | 09:49 | |
*** eday has quit IRC | 09:49 | |
*** sandywalsh has quit IRC | 09:49 | |
*** gholt has quit IRC | 09:49 | |
*** creiht has quit IRC | 09:49 | |
*** brainproxy has quit IRC | 09:50 | |
*** Robi_ has quit IRC | 09:50 | |
*** mattt has quit IRC | 09:50 | |
*** karmabot` has quit IRC | 09:50 | |
*** karmabot has joined #openstack | 09:50 | |
*** _morfeas has joined #openstack | 09:50 | |
*** creiht_ has joined #openstack | 09:50 | |
*** jonwood has joined #openstack | 09:50 | |
*** mattt has joined #openstack | 09:50 | |
*** miclorb_ has joined #openstack | 09:50 | |
*** klumpie has joined #openstack | 09:50 | |
*** seats has joined #openstack | 09:50 | |
*** eday has joined #openstack | 09:51 | |
*** xtoddx has joined #openstack | 09:51 | |
*** gholt has joined #openstack | 09:53 | |
zykes- | can one manage san disks with openstack ? | 09:55 |
jonwood | zykes-: What sense? | 09:56 |
jonwood | *In* what sense? | 09:57 |
*** jonwood has quit IRC | 10:02 | |
*** jonwood has joined #openstack | 10:04 | |
*** sandywalsh has joined #openstack | 10:05 | |
*** brainproxy has joined #openstack | 10:07 | |
zykes- | jonwood: in the sense of managing disks. | 10:07 |
zykes- | or storage on nodes | 10:07 |
jonwood | As far as I can see at the moment the only supported storage method for disks is on your filesystem as disk images. | 10:08 |
jonwood | But there's no reason you can't mount SAN storage and tell it to put disk images there. | 10:08 |
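jonwood's suggestion boils down to mounting the SAN volume on each compute host and pointing nova at it. A flagfile entry along these lines is a sketch (the mount point is hypothetical, and the `--instances_path` flag name should be verified against your nova version):

```
# nova-compute flagfile sketch: store instance disk images on a
# SAN-backed mount instead of the default local state path.
# /mnt/san/instances is a hypothetical mount point.
--instances_path=/mnt/san/instances
```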
alekibango | i would think sheepdog is interesting | 10:08 |
jonwood | alekibango: It is if you don't already have a SAN installed. | 10:09 |
jonwood | If you've spend tens of thousands of dollars on storage already, you'll want to use that ;) | 10:09 |
alekibango | :) | 10:09 |
alekibango | i do not want to spend those | 10:09 |
alekibango | i would like to avoid the server tax | 10:09 |
alekibango | if you know what i mean | 10:10 |
alekibango | .. but still have it nice, fast, reliable | 10:10 |
zykes- | how to best use shared storage now then ? | 10:11 |
jonwood | zykes-: Mount it as part of your filesystem | 10:12 |
alekibango | zykes-: i would think glusterfs is fitting for image store -- but might be slow for live (used) disks | 10:12 |
alekibango | zykes-: i think for instance disks it will be best to implement sheepdog support for KVM (qemu) hypervisors | 10:13 |
zykes- | doesn't anyone here use san for disks ? | 10:14 |
alekibango | jdarcy will be here in 4 hours, ask him :) | 10:14 |
alekibango | zykes-: they for sure do | 10:14 |
zykes- | how is that implemented then, i wonder | 10:14 |
alekibango | zykes-: for now, disks for instances are created on local disks using lvm -- am i right ? | 10:15 |
jonwood | zykes-: As I said, you mount storage from your SAN as part of the filesystem. | 10:15 |
jonwood | alekibango: No. At the moment instance disks are created as files wherever you tell nova-compute to put them. | 10:16 |
alekibango | zykes-: there are big problems with storage those days... try googling 'silent data corruption' :) | 10:16 |
alekibango | jonwood: thanks, for now i used only fake storage | 10:17 |
*** gustavomzw has joined #openstack | 10:17 | |
alekibango | that will change soon | 10:17 |
zykes- | alekibango: yeah, that's the way i've been doing stuff lately | 10:20 |
zykes- | what's the use of Sheepdog if you use a san ? | 10:22 |
jonwood | zykes-: There isn't one really. | 10:23 |
*** befreax has quit IRC | 10:23 | |
zykes- | i guess sheepdog is more useful to distribute data across nodes .. | 10:23 |
alekibango | yes, san can be less effective in resource usage :) | 10:23 |
alekibango | (depends on arch) | 10:23 |
zykes- | hmm | 10:24 |
jonwood | alekibango: I doubt sheepdog is going to get close to the performance of a decent SAN. | 10:24 |
zykes- | it reminds me of Panasas | 10:24 |
zykes- | in some way or "pnfs" | 10:24 |
alekibango | panasas? | 10:24 |
zykes- | yeah, hpc storage | 10:24 |
zykes- | or i mean, in the way of replication etc | 10:24 |
alekibango | pnfs sucks :) | 10:24 |
zykes- | heh, well it's easy to administer and gives high performance on clusters + storage cap. | 10:25 |
alekibango | zykes-: maybe :) | 10:26 |
zykes- | like add x tb of storage, add a shelf and do one command and you're done. | 10:26 |
alekibango | how it performs with large files (disk images used by vguests)? | 10:27 |
zykes- | it's not for that kind of usage as far as I know | 10:28 |
zykes- | it's meant for HPC clusters | 10:28 |
alekibango | well, that means sheepdog might be much faster :) | 10:28 |
alekibango | ... i didnt benchmark it | 10:29 |
alekibango | jsut guessing | 10:29 |
alekibango | from the arch | 10:29 |
alekibango | one sheepdog cluster for a cluster of nova servers -> happy | 10:29 |
*** miclorb_ has quit IRC | 10:41 | |
zykes- | alekibango: how would you recon one should setup a 4+ node cluster ? | 10:48 |
jonwood | zykes-: Why specifically 4 nodes? | 10:49 |
alekibango | zykes-: he would add servers to his cluster :) | 10:49 |
alekibango | having around 20 or more he should really think about redesign | 10:49 |
jonwood | alekibango: Yes, but anything over a single node is the same process. | 10:49 |
alekibango | jonwood: well, no its not | 10:49 |
jonwood | It's just a matter of which machines run which services. | 10:49 |
alekibango | it will have different architectures | 10:50 |
alekibango | and different problems | 10:50 |
alekibango | different bottlenecks | 10:50 |
jonwood | Yes it is. You might end up having a MySQL cluster instead of a single database server, and failover, but the architecture is the same. | 10:50 |
alekibango | i dont think so. | 10:50 |
alekibango | first, he will have more clusters | 10:50 |
alekibango | not only one | 10:50 |
alekibango | that makes big difference itself | 10:51 |
alekibango | second - he will have different needs | 10:51 |
jonwood | Why would you have you more clusters? | 10:51 |
alekibango | because of the need to make it more reliable and secure -- by having more providers in more countries | 10:51 |
alekibango | or whatever | 10:51 |
alekibango | jonwood: limits on network bandwidth | 10:52 |
alekibango | you will need to split it in some scale | 10:52 |
alekibango | to make it effective | 10:52 |
jonwood | That entirely depends on the situation it's being used in. | 10:52 |
alekibango | you will bring in more interfaces, different technologies | 10:52 |
alekibango | jonwood: yes, but for 4 servers, we can keep requirements low | 10:53 |
alekibango | and they can be installed in a way that gives some reliability | 10:53 |
zykes- | alekibango: can't one use a distributed database instead of mysql ? | 10:53 |
alekibango | zykes-: i would use a mysql cluster or postgresql cluster from the start | 10:54 |
jonwood | zykes-: The main database needs to be SQL based at the moment. | 10:54 |
alekibango | mysql and postgresql can be clustered | 10:54 |
zykes- | something like drizzle could be used ro ? | 10:55 |
alekibango | zykes-: i am not sure about implementation | 10:55 |
jonwood | zykes-: Probably. | 10:55 |
alekibango | i am more inclined to believe in postgresql :) | 10:55 |
jonwood | zykes-: I'm not sure that anything supports splitting reads and writes yet though, so you might need to put mysql proxy in front of it. | 10:56 |
alekibango | the only requirement is to have sqlalchemy working with the db | 10:56 |
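alekibango's point -- the backend only has to be something SQLAlchemy can talk to -- comes down to swapping the connection URL. A minimal sketch (the table layout is invented for illustration; the demo runs against in-memory SQLite, with the MySQL/PostgreSQL URL forms shown in a comment):

```python
import sqlalchemy as sa

# The same code works against any SQLAlchemy-supported backend by
# changing only the URL, e.g.:
#   mysql://nova:secret@dbhost/nova
#   postgresql://nova:secret@dbhost/nova
engine = sa.create_engine("sqlite://")  # in-memory, for demonstration

meta = sa.MetaData()
instances = sa.Table("instances", meta,
                     sa.Column("id", sa.Integer, primary_key=True),
                     sa.Column("hostname", sa.String(255)))
meta.create_all(engine)

with engine.connect() as conn:
    conn.execute(instances.insert().values(hostname="node-1"))
    rows = conn.execute(sa.text("SELECT hostname FROM instances")).fetchall()

print(rows)  # one row: ('node-1',)
```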
jonwood | Although it might be possible to just point your cloud controller at the master, and have everything else running off the slaves. | 10:56 |
jonwood | I'm not sure which components do writes to the database. | 10:56 |
zykes- | jonwood: mysql proxy in front of what ? | 10:57 |
jonwood | Your database cluster. | 10:57 |
alekibango | mysql can be clustered, having servers on each node | 10:58 |
alekibango | -- for 4 servers | 10:58 |
zykes- | alekibango: ndb cluster | 10:58 |
jonwood | Unless Drizzle is a proper distributed database now, I'm not sure. | 10:58 |
zykes- | you mean or ? | 10:58 |
alekibango | for 30+ you will need another arch | 10:58 |
alekibango | zykes-: yes | 10:58 |
alekibango | thats what i mean by 4 servers and 30+ servers, there are different needs, really :) | 10:58 |
zykes- | hehe | 10:59 |
alekibango | most small companies will use 4-6 servers | 10:59 |
alekibango | for private clouds | 10:59 |
alekibango | big ones will use 30+ schema | 10:59 |
*** dizz has joined #openstack | 10:59 | |
*** dizz is now known as dizz|away | 10:59 | |
zykes- | pr cluster then i guess ? | 11:00 |
alekibango | pr?? | 11:00 |
zykes- | per cluster sorry | 11:00 |
alekibango | well, there is a need for central DB, or am i mistaken? | 11:00 |
alekibango | at least for users if you will use it | 11:01 |
alekibango | and having 500+ servers you will have other needs | 11:01 |
alekibango | but you should have enough resources to figure that out | 11:01 |
alekibango | :) | 11:01 |
jonwood | I still don't get where you're getting that information from. As far as I'm aware no one but NASA *has* a 500+ server nova cloud right now. | 11:02 |
jonwood | So any declarations that you'll need one setup or another are just guesses. | 11:02 |
alekibango | :) yes they are :) | 11:02 |
alekibango | but no one opposes -> that means maybe i am right | 11:03 |
alekibango | :) | 11:03 |
jonwood | In that case, start simple, see where problems occur, fix them, and repeat. | 11:03 |
alekibango | one of my fav. ways to find out truth is to claim something and wait for reaction :) | 11:03 |
alekibango | jonwood: yes, thats why we start with 4 servers | 11:03 |
jonwood | Starting out with a multi-server database cluster because it *might* be where you see problems is just throwing away money. | 11:03 |
zykes- | how should one cluster pgsql then ? | 11:03 |
alekibango | zykes-: there are more ways now. pg 9.x is out | 11:03 |
alekibango | and i feel i am not the authority to decide this :) as i didnt test it irl | 11:04 |
zykes- | so for a 10+ node setup you should test clustering ? | 11:05 |
zykes- | of sql resources | 11:05 |
jonwood | zykes-: Personally I doubt you'll need it, except for failover if your primary database dies. | 11:05 |
alekibango | jonwood: i dont say there are big differences... but there are some :) | 11:06 |
alekibango | zykes-: for 10 i would use one cluster mysql -- if network will be ok | 11:06 |
alekibango | s/mysql// from last line | 11:06 |
zykes- | hacluster then ? | 11:07 |
alekibango | http://www.postgresql.org/docs/current/interactive/different-replication-solutions.html | 11:08 |
jonwood | That's probably what I'd do. master/master replication between a pair of MySQL servers with failover. | 11:08 |
jonwood | And then start adding slaves if you see performance problems. | 11:08 |
alekibango | pair, failover - that sounds dangerous to me :) | 11:09 |
jonwood | Yes. Almost as dangerous as running without any way of recovering from failure of a core part of your cluster. | 11:10 |
alekibango | i would use all 4 servers in that 4 server setup | 11:10 |
alekibango | if possible, in synchronnous way -- but having fast reads | 11:10 |
zykes- | alekibango: for a sql cluster ? | 11:11 |
alekibango | yes | 11:11 |
alekibango | but i am not sure how, as i never did that. i always used single server installs for now :) | 11:12 |
*** dizz|away has quit IRC | 11:22 | |
*** dendro-afk is now known as dendrobates | 11:29 | |
*** ArdRigh has quit IRC | 11:31 | |
*** ctennis has quit IRC | 11:33 | |
*** ctennis has joined #openstack | 11:53 | |
*** stewart has quit IRC | 11:58 | |
*** coolhandluke has joined #openstack | 12:07 | |
jaypipes | *yawn* | 12:11 |
*** hazmat has joined #openstack | 12:14 | |
dendrobates | jaypipes: I agree | 12:14 |
*** coolhandluke has quit IRC | 12:14 | |
jaypipes | dendrobates: :) | 12:15 |
jonwood | thirded | 12:18 |
* jonwood goes back to invoicing | 12:18 | |
*** ratasxy has joined #openstack | 12:23 | |
*** westmaas has joined #openstack | 12:23 | |
ratasxy | hello, what is the date for the final release of openstack | 12:24 |
jaypipes | dendrobates: ^^ | 12:24 |
*** gundlach has joined #openstack | 12:27 | |
*** stewart has joined #openstack | 12:28 | |
*** cloudmeat has joined #openstack | 12:30 | |
*** cloudmeat1 has quit IRC | 12:33 | |
*** omidhdl has left #openstack | 12:36 | |
alekibango | ratasxy: 21st | 12:39 |
alekibango | but that will be initial, not final | 12:39 |
alekibango | final release will be when openstack will die | 12:40 |
dendrobates | ha | 12:40 |
*** kashyapc has joined #openstack | 12:40 | |
alekibango | dendrobates: i hope i am right :) | 12:41 |
*** omidhdl1 has joined #openstack | 12:41 | |
*** dizz has joined #openstack | 12:43 | |
*** dizz is now known as dizz|away | 12:43 | |
zykes- | alekibango: what kind of cluster ? | 12:44 |
zykes- | for pgsql | 12:45 |
*** ratasxy has quit IRC | 12:45 | |
zykes- | cause i guess if you want performance you need to set some nodes read only ? | 12:45 |
*** al-maisan has joined #openstack | 12:46 | |
*** befreax has joined #openstack | 12:51 | |
*** befreax has left #openstack | 12:51 | |
*** befreax has joined #openstack | 12:54 | |
*** jpdion has joined #openstack | 12:57 | |
*** aliguori has joined #openstack | 13:08 | |
*** DubLo7 has joined #openstack | 13:08 | |
jaypipes | grrr, I think I'm coming down with a cold... :( | 13:19 |
jaypipes | haven't had one in over a year... | 13:19 |
*** schisamo has joined #openstack | 13:25 | |
alekibango | zykes-: imho nova will not load the DB that much. | 13:28 |
alekibango | i would be rather concerned about consistency and availability | 13:29 |
*** DubLo7 has quit IRC | 13:30 | |
crazed | anyone have good info on using multiple compute nodes? | 13:31 |
crazed | i'm on ubuntu 10.10 so things are in the repos | 13:31 |
alekibango | crazed: its the topic of the day | 13:31 |
crazed | oh | 13:31 |
alekibango | crazed: http://etherpad.openstack.org/NovaMultinodeInstall | 13:31 |
alekibango | yesterday we started documenting it | 13:32 |
crazed | how perfect | 13:32 |
crazed | i've got some test servers that we were going to use for cloudstack | 13:32 |
crazed | but the networking for it is pretty complicated | 13:33 |
alekibango | crazed: yes, help us write those docs | 13:33 |
alekibango | and put there your thoughts/questions | 13:33 |
*** krish has quit IRC | 13:33 | |
crazed | if i follow the steps for doing it on one node first, is that going to cause problems when trying to add the others | 13:34 |
alekibango | crazed: yes you will need database | 13:35 |
alekibango | and network setup | 13:35 |
alekibango | etc, see the doc | 13:35 |
crazed | slowly reading through it | 13:35 |
alekibango | there are 2 docs right now and some questions around | 13:37 |
*** omidhdl1 has left #openstack | 13:38 | |
*** befreax has left #openstack | 13:38 | |
*** jdarcy has joined #openstack | 13:41 | |
crazed | ah yeah the bottom part seems closer to what i'll be doing | 13:42 |
crazed | cuz they're using the PPA | 13:42 |
alekibango | crazed: first release is in 2 days | 13:43 |
alekibango | documentation is behind a bit, please help :) | 13:43 |
crazed | i will help as soon as i get into the installation bits | 13:43 |
crazed | openstack does vlan tagging? | 13:44 |
openstack | crazed: Error: "does" is not a valid command. | 13:44 |
*** dendrobates is now known as dendro-afk | 13:45 | |
*** pvo has joined #openstack | 13:46 | |
*** ChanServ sets mode: +v pvo | 13:46 | |
alekibango | openstack: rename yourself | 13:47 |
openstack | alekibango: Error: You don't have the owner capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified. | 13:47 |
*** jpdion has quit IRC | 13:47 | |
alekibango | crazed: it can use some kind of vlans, not sure about details, see source :) | 13:48 |
pvo | alekibango: openstack bot getting snarky? | 13:48 |
crazed | ideally should be able to create separate "solutions" where each solution would be in a private vlan separated from the others | 13:50 |
crazed | i've got a layer 3 switch that supports the vlan tagging, but i'll start with something basic for now | 13:51 |
*** mdomsch has joined #openstack | 13:51 | |
alekibango | crazed: it can do it without the tagging switch | 13:53 |
alekibango | using virtual lans | 13:53 |
alekibango | (software) | 13:53 |
*** f4m8 is now known as f4m8_ | 13:54 | |
crazed | but then one of your nodes would have to be the gateway? right? | 13:54 |
alekibango | crazed: http://nova.openstack.org/architecture.html | 13:54 |
crazed | ATAoE hm | 13:55 |
crazed | seems pretty similar to eucalyptus's layout | 13:55 |
*** dendro-afk is now known as dendrobates | 13:55 | |
alekibango | well. eucalyptus might have problems with scaling :) | 13:56 |
alekibango | but its similar | 13:56 |
alekibango | cloud controllers (public api), cloud controllers, ... etc | 13:56 |
alekibango | eh cluster controllers | 13:56 |
dendrobates | pvo: who owns the openstack bot? | 13:56 |
*** BK_man has quit IRC | 13:57 | |
alekibango | you might know by the ip address? | 13:57 |
crazed | openstack: owner | 13:57 |
openstack | crazed: Error: "owner" is not a valid command. | 13:57 |
crazed | damn | 13:57 |
dendrobates | crazed: nova was originally written as a drop-in replacement for eucalyptus | 13:57 |
crazed | this makes sense | 13:57 |
crazed | i've had limited experience with eucalyptus | 13:58 |
dendrobates | crazed: we have made most eucalyptus-specific design flaws optional | 13:58 |
crazed | but the scalability issues caused us to drop it | 13:58 |
*** Cybodog has joined #openstack | 14:00 | |
*** mdomsch_ has joined #openstack | 14:01 | |
crazed | does the cloud controller need root access to the mysql server? | 14:01 |
crazed | or does it only use one database | 14:02 |
dendrobates | the sql server can be setup however you want. So it is up to your DBA's | 14:02 |
*** mdomsch has quit IRC | 14:02 | |
crazed | cool. | 14:02 |
crazed | i'm just looking at these docs on etherpad and they are using root.. which i'd never do | 14:02 |
alekibango | crazed: you know, people are lazy to configure it :) | 14:03 |
alekibango | and novascript from vishy should be run under root user :) | 14:03 |
crazed | can the network type be changed after initial configuration? | 14:05 |
*** Ryan_Lane has quit IRC | 14:05 | |
crazed | --network_manager=nova.network.manager.FlatManager, i'll probably want to move to vlans in the future | 14:05 |
dendrobates | vishy's script is for dev testing and not setting up a real environment | 14:05 |
dendrobates | crazed: not sure. probably not without headache if you have running vm's | 14:06 |
crazed | ok, i'll probably just destroy any running VMs before changing something like that | 14:06 |
dendrobates | I think it wouldn't be a big deal if you don't have any vm's | 14:06 |
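The setting under discussion is the `--network_manager` flag. A flagfile sketch (FlatManager and VlanManager are the two managers that come up in this conversation; check your nova version for the full list):

```
# pick one network manager before creating projects/networks;
# changing it later with running VMs is risky, as noted above
--network_manager=nova.network.manager.FlatManager
# or the default:
# --network_manager=nova.network.manager.VlanManager
```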
*** ppetraki has joined #openstack | 14:11 | |
soren | creiht_: Kinda. | 14:12 |
soren | creiht_: Err... Never mind. | 14:12 |
soren | creiht_: I was scrolled *way* up, and the last thing I could see was you asking if I was around :) | 14:12 |
creiht_ | haha | 14:14 |
pvo | dendrobates: we do. ant specifically. | 14:14 |
dendrobates | pvo: can we change the name from openstack to something else? | 14:15 |
pvo | yea, was going to talk to ant about it. | 14:15 |
dendrobates | ta | 14:15 |
creiht_ | or at least be a little smarter about what it should respond to | 14:16 |
crazed | AttributeError: 'FlatManager' object has no attribute 'allocate_network' | 14:17 |
crazed | hm that's what i get when trying to create a new project | 14:17 |
*** creiht_ is now known as creiht | 14:18 | |
*** ChanServ sets mode: +v creiht | 14:18 | |
alekibango | openstack bot should respond only on lines starting openstack: (with :) | 14:20 |
openstack | alekibango: Error: "bot" is not a valid command. | 14:20 |
*** gaveen has quit IRC | 14:21 | |
pvo | alekibango: don't tease the bot. :p | 14:22 |
crazed | http://paste.openstack.org/show/62/ | 14:23 |
crazed | anyone run into that before? | 14:23 |
crazed | trying to create a new project | 14:23 |
*** openstack has joined #openstack | 14:29 | |
*** pvo_ has quit IRC | 14:29 | |
*** kevnfx has quit IRC | 14:31 | |
*** kashyapc has quit IRC | 14:32 | |
*** pvo_ has joined #openstack | 14:32 | |
*** ChanServ sets mode: +v pvo_ | 14:32 | |
*** pvo has quit IRC | 14:32 | |
*** pvo_ is now known as pvo | 14:32 | |
dendrobates | crazed: if it still exists in trunk, please submit a bug | 14:32 |
crazed | --network_manager=nova.network.manager.FlatManager | 14:34 |
crazed | removing that line from the various configs fixes it | 14:34 |
zykes- | is the PPA the latest version ? | 14:34 |
*** HouseAway is now known as AimanA | 14:34 | |
dendrobates | zykes-: it is a daily build, so it is much newer. | 14:35 |
dendrobates | crazed: but why is it trying to allocate network with the flat model? We still need to see if it exists in trunk, so we can fix it before release. | 14:37 |
*** Ryan_Lane has joined #openstack | 14:38 | |
crazed | ah looks like the default is VlanManager | 14:39 |
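The failure above can be sketched in miniature. The classes here are illustrative stand-ins, not nova's real implementations: project creation calls `allocate_network` on whichever manager the `--network_manager` flag names, and the era's `FlatManager` simply didn't define that method.

```python
# Illustrative reproduction of the AttributeError crazed hit: code that
# assumes every network manager implements allocate_network blows up
# when a manager class omits it.
class VlanManager:
    def allocate_network(self, project):
        return "10.0.0.0/24"  # pretend per-project VLAN subnet

class FlatManager:
    pass  # no allocate_network in the flat model of this era

def create_project(manager, name):
    return manager.allocate_network(name)

print(create_project(VlanManager(), "proj"))  # 10.0.0.0/24
try:
    create_project(FlatManager(), "proj")
except AttributeError as err:
    print(err)  # 'FlatManager' object has no attribute 'allocate_network'
```

Dropping the `--network_manager` flag falls back to the default (VlanManager), which is why removing it from the configs made project creation work.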
*** sirp has joined #openstack | 14:40 | |
crazed | /usr/bin/python /usr/bin/nova-manage project create network | 14:40 |
crazed | in those etherpad docs, why does it have that | 14:40 |
crazed | doesn't seem like a valid command | 14:41 |
*** pharkmillups has joined #openstack | 14:41 | |
*** pharkmillups has joined #openstack | 14:42 | |
dendrobates | it should be, I think we are not testing the flat network model well enough. | 14:44 |
*** abecc has joined #openstack | 14:44 | |
dendrobates | any volunteers? | 14:44 |
zykes- | for what ? | 14:45 |
*** abecc has joined #openstack | 14:45 | |
dendrobates | testing flat networking | 14:45 |
crazed | blah i can't get nova-network or nova-scheduler to start | 14:45 |
spy | dendrobates: bot should be a little more quiet now | 14:46 |
crazed | how close is the PPA to ubuntu's archive | 14:46 |
dendrobates | crazed: very different, much newer. | 14:46 |
crazed | should i try that instead | 14:47 |
dendrobates | crazed: people have not been having many problems with the ubuntu packages. which instructions are you following? | 14:48 |
crazed | 0.9.1~bzr331-0ubuntu2 | 14:48 |
crazed | http://etherpad.openstack.org/NovaMultinodeInstall | 14:48 |
crazed | under the pinkish color | 14:48 |
crazed | "Setup the cloud controller first" | 14:48 |
dendrobates | crazed: are you installing on multiple systems? | 14:49 |
dendrobates | openstack are you quieter now? | 14:50 |
crazed | dendrobates: attempting to, but starting with just the cloud controller | 14:50 |
*** RexMundi has joined #openstack | 14:50 | |
dendrobates | openstack: are you quieter now? | 14:51 |
*** kashyapc has joined #openstack | 14:51 | |
dendrobates | openstack help | 14:51 |
dendrobates | it's pretty quiet | 14:51 |
crazed | nova.exception.NotFound: project network data has not been set | 14:52 |
dendrobates | this feels like configuration. I'll try it myself when I get a chance | 14:54 |
Ryan_Lane | I made an opends/opendj/sun directory server version of the nova ldap schema. anywhere in particular I should be sending things like this? | 14:54 |
*** mdomsch_ has quit IRC | 14:54 | |
crazed | those instructions also seem to leave out rabbitmq and redis | 14:55 |
crazed | i'm checking out http://wiki.openstack.org/NovaInstall now | 14:55 |
creiht | Ryan_Lane: nice! | 14:55 |
crazed | they didn't change any of the defaults | 14:55 |
* creiht is a fan of the sun directory server | 14:56 | |
dendrobates | redis is only needed for fake ldap, if you use a real ldap server you don't need it | 14:56 |
*** gundlach has joined #openstack | 14:56 | |
Ryan_Lane | creiht: I also plan on sending in some patches for the ldap driver | 14:56 |
crazed | one of those two made nova-network start properly | 14:56 |
Ryan_Lane | creiht: I like it too, but I'm really starting to favor opendj (the fork of opends) | 14:57 |
creiht | Ryan_Lane: doesn't that mean that it would work with the fedora directory server as well? Or have those projects divered now? | 14:57 |
creiht | (it has been a while for me) | 14:57 |
Ryan_Lane | I hate openldap with a passion | 14:57 |
Ryan_Lane | well | 14:57 |
Ryan_Lane | ... | 14:57 |
creiht | diverged | 14:57 |
crazed | it was rabbitmq | 14:57 |
crazed | i love openldap | 14:57 |
crazed | at least 2.4 and up | 14:57 |
Ryan_Lane | fedora directory server is based on iplanet | 14:57 |
crazed | openldap + apache directory studio | 14:57 |
Ryan_Lane | and so is sun directory server, somewhat | 14:57 |
creiht | I thought they had a common ancestor or something? | 14:57 |
creiht | ahh ok | 14:58 |
dendrobates | Ryan_Lane: commit it to the archive | 14:58 |
Ryan_Lane | the schemas should be compatible | 14:58 |
Ryan_Lane | I don't have commit access | 14:58 |
creiht | Ryan_Lane: make a merge proposal | 14:58 |
* Ryan_Lane doesn't know bzr or launchpad :) | 14:58 | |
dendrobates | Ryan_Lane: have you signed the CLA? | 14:58 |
Ryan_Lane | nope | 14:58 |
dendrobates | ah | 14:58 |
creiht | either way, they both support multi-master replication, which is nice | 14:58 |
dendrobates | well wait until after the release and I will help you. | 14:58 |
Ryan_Lane | though I'm likely to send in a number of patches, so it's probably a good idea | 14:59 |
creiht | the tools for sun directory server are very nice | 14:59 |
Ryan_Lane | opendj is like a java rewrite of sun directory server. it's pretty awesome | 14:59 |
creiht | all that said, I don't miss having to mess with LDAP :) | 14:59 |
creiht | ahh | 14:59 |
*** deshantm has quit IRC | 15:00 | |
Ryan_Lane | yeah, after release sounds good though | 15:00 |
Ryan_Lane | is there any way to have instances start a VNC session when they are launched? | 15:02 |
Ryan_Lane | err vnc server | 15:03 |
*** silassewell has joined #openstack | 15:08 | |
zykes- | http://socializedsoftware.com/wp-content/uploads/2010/07/WebMainScreen.jpg < what gui is that ? | 15:09 |
Ryan_Lane | the ruby one | 15:10 |
Ryan_Lane | lemme see if I can find it anywhere | 15:10 |
crazed | ValueError: IP('10.0.3.244/25') has invalid prefix length (25) | 15:10 |
crazed | hm aggravating | 15:10 |
zykes- | crazed: isn't that invalid ? | 15:10 |
crazed | 25 is definitely a valid prefix length | 15:10 |
crazed | it was because of the network-size/range options | 15:12 |
Ryan_Lane | zykes-: https://launchpad.net/openstack-web-control-panel | 15:12 |
zykes- | same dude has made one for ruby and another for objectivec | 15:13 |
*** mdomsch has joined #openstack | 15:14 | |
vishy | 244/25 can't be right | 15:16 |
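vishy is right: /25 is a legal prefix length, but 10.0.3.244 is not a network address under it. The stdlib `ipaddress` module (standing in here for the IPy library nova used at the time) shows the distinction:

```python
import ipaddress

# 10.0.3.244 has host bits set below a /25 mask (its containing network
# is 10.0.3.128/25), so strict parsing rejects it -- the same complaint
# IPy raises in crazed's traceback, just worded differently.
try:
    ipaddress.ip_network("10.0.3.244/25")
except ValueError as err:
    print(err)

# Either give the true network address, or let the library mask the host bits:
net = ipaddress.ip_network("10.0.3.244/25", strict=False)
print(net)  # 10.0.3.128/25
```

This is also why the network-size/range options mattered: they determine whether the computed network address lands on a clean prefix boundary.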
*** deshantm has joined #openstack | 15:21 | |
*** pvo has quit IRC | 15:23 | |
crazed | this is cool | 15:23 |
crazed | i've got a working cloud controller :) | 15:23 |
*** kevnfx has joined #openstack | 15:23 | |
*** pvo has joined #openstack | 15:25 | |
*** pvo has joined #openstack | 15:25 | |
*** ChanServ sets mode: +v pvo | 15:25 | |
vishy | crazed: nice | 15:25 |
crazed | couldn't get the flat networking to work though | 15:27 |
*** coolhandluke has joined #openstack | 15:27 | |
*** DesiJat has quit IRC | 15:32 | |
*** pvo has quit IRC | 15:40 | |
*** calavera has quit IRC | 15:43 | |
crazed | lunch then time to get a node working | 15:43 |
crazed | by the way, are there any web interfaces to openstack yet? | 15:47 |
Ryan_Lane | crazed: https://launchpad.net/openstack-web-control-panel | 15:48 |
crazed | awesome. | 15:48 |
Ryan_Lane | it's not really done | 15:48 |
Ryan_Lane | by any means :) | 15:48 |
crazed | that's fine with me, this is entirely for testing purposes | 15:49 |
crazed | so beta software is good enough | 15:49 |
Ryan_Lane | I think it's more like early alpha ;) | 15:49 |
crazed | haha even better | 15:49 |
Ryan_Lane | yeah, what's the fun in using something finished :D | 15:49 |
*** sirp has quit IRC | 15:50 | |
Ryan_Lane | sounds like you are a glutton for punishment, like myself | 15:50 |
*** sirp has joined #openstack | 15:56 | |
*** kashyapc has quit IRC | 15:57 | |
*** pvo has joined #openstack | 16:01 | |
*** pvo has joined #openstack | 16:01 | |
*** ChanServ sets mode: +v pvo | 16:01 | |
*** pvo has quit IRC | 16:01 | |
crazed | how do you handle shared storage for vms and possibly moving instances around between nodes | 16:03 |
crazed | i know libvirt is capable of that, just wondering how you guys have implemented it | 16:03 |
crazed | i've got some nice NFS storage attached to each of my nodes already | 16:03 |
crazed | seems like nova-volume is in charge of this stuff | 16:04 |
*** fitzdsl has joined #openstack | 16:07 | |
creiht | mtaylor: so should we set up a lp:swift/1.1 that points at the 1.1 series? | 16:07 |
mtaylor | creiht: yeah. I think so. | 16:07 |
creiht | and is that kinda like a tag with bzr/lp? | 16:07 |
mtaylor | creiht: not really - we'll want to push a completely different branch - then on lp we'll make a 'release series' | 16:08 |
mtaylor | or a 'series' rather | 16:08 |
creiht | ok | 16:08 |
mtaylor | releases always come in the context of a series in launchpad world | 16:08 |
creiht | ok | 16:08 |
creiht | thanks | 16:08 |
creiht | I think we have our code ready... we are testing the package building right now | 16:09 |
creiht | mtaylor: so we could cut a tarball now | 16:09 |
creiht | how does one do that? | 16:09 |
mtaylor | python setup.py sdist | 16:09 |
mtaylor | and then you'll want to upload it to launchpad - we should make the 1.1 series first... one sec | 16:10 |
creiht | k | 16:10 |
mtaylor | ah - there already is a series. win | 16:12 |
mtaylor | so this will be 1.1.0 ? | 16:12 |
creiht | yup | 16:12 |
creiht | is there anything we should do to mark it as an RC? | 16:13 |
crazed | http://paste.openstack.org/show/63/ | 16:13 |
crazed | damn, first attempt to launch an instance failed | 16:13 |
mtaylor | not really - well... what do you mean by RC? - that this is an RC for 1.1.0? or that it's an RC for 1.1 ? | 16:13 |
mtaylor | bleh. that's not good | 16:14 |
*** pvo has joined #openstack | 16:14 | |
*** pvo has joined #openstack | 16:14 | |
*** ChanServ sets mode: +v pvo | 16:14 | |
*** kashyapc has joined #openstack | 16:14 | |
creiht | mtaylor: not sure | 16:14 |
creiht | :) | 16:15 |
creiht | I would say RC for 1.1.0 | 16:15 |
creiht | or is that not the right way to think about it? | 16:15 |
mtaylor | I don't know that there is a _right_ way | 16:15 |
creiht | heh | 16:15 |
mtaylor | but - I certainly would not release a thing called swift-1.1.0.tar.gz and then release another thing named the same thing in a few days | 16:16 |
creiht | so this is what we will be basing the release off of, but we may have to contribute bug fixes before the release | 16:16 |
creiht | yeah that was my thought | 16:16 |
creiht | lunch time... bbl | 16:16 |
mtaylor | you could edit setup.py and change the version to 1.1.0rc or 1.1.0pre or 1.0.99 or something | 16:16 |
creiht | k | 16:17 |
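mtaylor's suggestion works because any of those spellings keeps the RC artifact distinct from the final 1.1.0 tarball. A rough sketch of how such pre-release tags should order — the key function is my own illustration, not anything swift or Launchpad actually uses:

```python
import re

def version_key(v):
    # Split "1.1.0rc" into its numeric release part and an optional
    # pre-release tag; a tagged version sorts *before* the bare release
    # it precedes, which naive string comparison gets wrong.
    m = re.match(r"(\d+(?:\.\d+)*)(.*)", v)
    nums = tuple(int(x) for x in m.group(1).split("."))
    tag = m.group(2)
    return (nums, 0 if tag else 1, tag)

versions = ["1.1.0", "1.0.99", "1.1.0rc", "1.1.0pre"]
print(sorted(versions, key=version_key))
# ['1.0.99', '1.1.0pre', '1.1.0rc', '1.1.0']
```

All three of mtaylor's candidates (1.1.0rc, 1.1.0pre, 1.0.99) sort before the eventual 1.1.0, so no two releases ever share a tarball name.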
*** pvo has quit IRC | 16:19 | |
*** netbuzzme has joined #openstack | 16:23 | |
crazed | freaking network object hm | 16:27 |
crazed | damn it, these log messages aren't that helpful hm | 16:31 |
crazed | 2010-10-19 12:31:06-0400 [-] AttributeError: 'NoneType' object has no attribute 'access' | 16:32 |
dendrobates | jaypipes: are you up and around? | 16:32 |
jaypipes | dendrobates: I am. | 16:33 |
*** roamin9 has joined #openstack | 16:34 | |
*** dysinger has joined #openstack | 16:36 | |
roamin9 | hi ;) | 16:37 |
crazed | http://paste.openstack.org/show/64/ | 16:37 |
crazed | i got a step further, but hm | 16:37 |
eday | dendrobates: I've proposed https://blueprints.edge.launchpad.net/nova/+spec/distributed-data-store for the sprint, looks like it needs some sort of approval to be listed in the sprint for discussion. | 16:40 |
*** daleolds has joined #openstack | 16:41 | |
dendrobates | it does, I haven't even started that process yet | 16:42 |
mtaylor | jaypipes: so I was just talking to Spamaps about that burndown report I showed you a few weeks ago | 16:42 |
crazed | InstanceLimitExceeded: Instance quota exceeded. You can only run 0 more instances of this type., crap where do i set this limit? | 16:43 |
eday | dendrobates: ahh, ok :) | 16:43 |
roamin9 | http://paste.openstack.org/show/65/ | 16:43 |
jaypipes | mtaylor: ya? | 16:44 |
roamin9 | when i try to use swift-auth-add-user, return that error, can anyone help me ? | 16:44 |
mtaylor | jaypipes: he's going to try to get the Work Item lists integrated directly into blueprints, which I think would be cool ... but I think he had a better definition of the when-to-use-blueprints vs. when-to-use-bugs (although I think the def I was tossing around still applies) | 16:45 |
mtaylor | jaypipes: his def is "blueprints are planned, bugs are unplanned" | 16:45 |
mtaylor | jaypipes: and then they break the blueprints down into work items by using formatted text in the whiteboard area and the burndown report | 16:46 |
roamin9 | this is the first time when i used swift, could you give me some tips ? | 16:46 |
jaypipes | mtaylor: that would be great :) I'm already using the Work Items style in my whiteboards... | 16:46 |
mtaylor | jaypipes: excellent | 16:47 |
*** silassewell has quit IRC | 16:47 | |
roamin9 | I did not care, just need some tips | 16:48 |
dendrobates | mtaylor jaypipes: pitti has already done something like that: http://people.canonical.com/~pitti/workitems/maverick/canonical-server-ubuntu-10.10.html | 16:49 |
dendrobates | we can talk to him at UDS about using his code | 16:49 |
jaypipes | dendrobates: yes, have seen that | 16:50 |
dendrobates | is the thing you are talking about better/different? | 16:50 |
jaypipes | creiht: see roamin9 ^^ | 16:50 |
mtaylor | dendrobates: yeah, that's actually what we're talking about | 16:50 |
mtaylor | dendrobates: we're just talking about integrating/using it :) | 16:50 |
dendrobates | cool, I agree. I was already planning on it | 16:51 |
mtaylor | dendrobates: excellent. | 16:51 |
*** coolhandluke has quit IRC | 16:51 | |
roamin9 | i just need some tips. thank u | 16:51 |
roamin9 | I'm new, just give me some tips. ok? | 16:53 |
*** matclayton has left #openstack | 16:55 | |
jaypipes | gholt, redbo: any ideas for roamin9? | 16:57 |
alekibango | roamin9: most people here are new.. developers who are not have hard time fixing bugs | 16:58 |
alekibango | (as release is on 21st) | 16:58 |
mtaylor | dendrobates: any ideas on what we should do with swift 1.1 RC? | 16:59 |
roamin9 | i know, thank you | 16:59 |
alekibango | roamin9: some service is not running imho | 16:59 |
mtaylor | dendrobates: my personal impulse is to just release 1.1.0 as is and indicate in release notes that it's RC ... and then release 1.1.1 whenever | 16:59 |
alekibango | or you are not configured well to connect to right one | 16:59 |
jaypipes | eday: added some notes to http://etherpad.openstack.org/DistributedDataStore. | 16:59 |
mtaylor | dendrobates: but I'm not sure if that's the "right" thing in the context of this project | 16:59 |
*** joearnold has joined #openstack | 17:03 | |
*** rlucio has joined #openstack | 17:03 | |
eday | jaypipes: in response to your comments, I see the API server touching the DB, expecting to be able to see all instances inside of it. The workers also pull/update from the same central data store, no? Did I misunderstand your comments or am I missing something? :) | 17:06 |
crazed | do you *need* ldap or redis to properly run instances? | 17:07 |
crazed | i was able to successfully get kernel, ramdisk, and machine instances up | 17:07 |
*** ibarrera has quit IRC | 17:07 | |
crazed | root@n1:~# euca-run-instances ami-et09ff0g --kernel ami-9v62mwx4 --ramdisk ami-l4n8vdny -k mykey | 17:07 |
crazed | NotAuthorized: None | 17:07 |
crazed | and i don't know where that's coming from, no mention of things in the log messages | 17:08 |
jonwood | eday, jaypipes: Have you seen mcollective? | 17:09 |
jonwood | It could be a good fit for distributing state to each node. | 17:09 |
alekibango | jonwood: i plan on using it later :) | 17:09 |
alekibango | jonwood: no, not for distributing state | 17:10 |
*** daleolds has quit IRC | 17:10 | |
alekibango | btw nova uses AMQP, as mcollective does | 17:10 |
eday | jonwood: I've read about it, but not too deeply | 17:10 |
gholt | roamin9: ECONNREFUSED indicates the auth server probably isn't running. When you do a 'ps axf | grep auth-server' does it find anything? | 17:11 |
alekibango | jonwood: i would love to have python alternative :)) | 17:11 |
dendrobates | crazed: did the instance get created anyway? | 17:11 |
jonwood | eday: It's main feature is the ability to aggregate data from multiple nodes. | 17:11 |
jonwood | alekibango: I'm sure you're quite welcome to write one. | 17:11 |
alekibango | jonwood: it should be simple, if you have an inexpensive array of redundant engineers | 17:12 |
alekibango | jonwood: i would hate having to learn ruby only to use it with mcollective :) | 17:12 |
jonwood | Ok. Don't use mcollective then. | 17:13 |
alekibango | i might be forced to. hundreds of machines are hard to manage ;) | 17:14 |
jaypipes | jonwood: no, haven't... I'll check it out. | 17:15 |
crazed | dendrobates: no | 17:15 |
crazed | nothing ever goes into the compute logs | 17:15 |
roamin9 | gholt: hi, when i run 'ps axf | grep auth-server' , there is no pid | 17:16 |
alekibango | jaypipes: http://marionette-collective.org/ | 17:16 |
gholt | roamin9: Okay, when you run 'swift-auth-server /etc/swift/auth-server.conf' is there any output? | 17:17 |
alekibango | jaypipes: btw i also added my comment to the article containing allegations of hijack...do you agree with me? :) | 17:19 |
roamin9 | gholt: yes, there is some error | 17:20 |
*** maplebed has joined #openstack | 17:21 | |
rlucio | crazed: actually i just noticed the same thing yesterday, the compute and volume logs don't seem to have any content even with --verbose | 17:21 |
roamin9 | gholt: [pipeline:main] section in /etc/swift/auth-server.conf is missing a 'pipeline' setting | 17:22 |
crazed | they did have logs when i was getting errors regarding "network" objects | 17:22 |
crazed | now it's like nothing is happening | 17:23 |
vishy | crazed: nova-manage project quota | 17:23 |
crazed | instances is set to 10 | 17:23 |
vishy | yeah you need to up instances and cores most likely | 17:24 |
crazed | cores is 20 | 17:24 |
vishy | nova-manage project quota instances 1000 | 17:24 |
vishy | er project quota <project> instances 1000 | 17:24 |
vishy | that is | 17:24 |
vishy | and same for cores | 17:24 |
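The refusal crazed saw ("you can only run 0 more instances") falls out of a check like the following sketch. Names and structure are illustrative, not nova's actual quota code:

```python
# The quota leaves `remaining = limit - in_use`; a launch request is
# capped at that, and a cap of 0 produces the "can only run 0 more
# instances of this type" refusal.
def allowed_instances(quota, running, requested):
    remaining = quota["instances"] - running
    return min(requested, remaining)

quota = {"instances": 10, "cores": 20}
print(allowed_instances(quota, running=10, requested=1))   # 0 -> refused
quota["instances"] = 1000                                  # after the nova-manage bump
print(allowed_instances(quota, running=10, requested=1))   # 1 -> allowed
```

Raising the project quota (instances and cores) with nova-manage widens `remaining`, which is all the fix amounts to.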
gholt | roamin9: Cool, fix that conf file up and that should be fixed. I assume you're doing SAIO? If so, a sample conf is in the instructions at http://swift.openstack.org/development_saio.html | 17:25 |
crazed | i upped them both to 1000, floating ips and volumes are still at 10 | 17:25 |
crazed | still getting NotAuthorized: None | 17:25 |
roamin9 | gholt: ^^ it's my error, i misspelled pipeline. thank u very much | 17:27 |
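For reference, the section roamin9's error complained about looks roughly like this in the SAIO sample conf — values here are illustrative, consult the linked docs for the full file:

```ini
# /etc/swift/auth-server.conf -- the [pipeline:main] section must contain
# a correctly spelled "pipeline" key, or paste.deploy raises the error above
[pipeline:main]
pipeline = auth-server

[app:auth-server]
use = egg:swift#auth
```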
vishy | crazed: sorry, that's not for NotAuthorized. did you create your user as an admin? | 17:27 |
vishy | that was a response to an eariler question you posted | 17:27 |
crazed | yeah.. i tried to fix that and now i'm at this | 17:27 |
crazed | but yes my user is admin | 17:27 |
crazed | wait | 17:28 |
crazed | | 2010-10-19 16:51:05 | NULL | NULL | 0 | crazed | NULL | admin | c3d12183-ad5e-41fd-b369-a6b77c5510f7 | 0 | | 17:28 |
vishy | nope he isn't | 17:28 |
crazed | last column is is_admin | 17:28 |
vishy | woah last column is secret i think | 17:28 |
crazed | there we go | 17:28 |
vishy | that doesn't look right | 17:28 |
crazed | | created_at | updated_at | deleted_at | deleted | id | name | access_key | secret_key | is_admin | | 17:28 |
crazed | +---------------------+------------+------------+---------+--------+------+------------+--------------------------------------+----------+ | 17:29 |
crazed | | 2010-10-19 16:51:05 | NULL | NULL | 0 | crazed | NULL | admin | c3d12183-ad5e-41fd-b369-a6b77c5510f7 | 1 | | 17:29 |
vishy | ah | 17:29 |
vishy | your access key is admin | 17:29 |
crazed | woo | 17:29 |
crazed | we have lift off | 17:29 |
vishy | nice | 17:29 |
vishy | :) | 17:29 |
crazed | no idea if the networking is right though | 17:29 |
crazed | it's running but yeah the ip is definitely wrong | 17:31 |
crazed | it's using an ip that is assigned to one of my compute nodes | 17:31 |
vishy | hehe | 17:31 |
*** burris has quit IRC | 17:31 | |
vishy | are you using flat networking? | 17:31 |
crazed | vlans | 17:31 |
vishy | oh really? | 17:32 |
crazed | yeah, flat networking wouldn't work for me | 17:32 |
vishy | when you created networks with nova-manage, you must have used a range that was in use? | 17:32 |
vishy | the networks are for instances, not for infrastructure | 17:32 |
crazed | i didn't use nova-manage to create networks | 17:33 |
crazed | this is the ubuntu archive version and i don't see nova-manage networks as an option | 17:33 |
vishy | oh? | 17:33 |
crazed | but hold on i have a conf call.. gr | 17:33 |
vishy | crazed: ah old version creates them from flags, so you need to set private_range flag (or possibly fixed_range, i'm not sure how long ago the flag was renamed) | 17:34 |
Ryan_Lane | yeah. the archives version, at least for lucid, don't have that option in nova-manage | 17:34 |
Ryan_Lane | you also can't terminate instances in the archive version :) | 17:34 |
rlucio | vishy: or flat_networking_ips | 17:36 |
rlucio | iirc | 17:36 |
vishy | rlucio: yeah, although he isn't using flat | 17:36 |
vishy | Ryan_Lane: we need a new build :) | 17:36 |
Ryan_Lane | would be nice :) | 17:37 |
vishy | soren: are you here? i have a bug in our version and I want to verify it with you before i post it and maybe discuss the best fix. | 17:37 |
Ryan_Lane | easier to test multinode with the packages | 17:38 |
alekibango | vishy: nova-manage is not documented at all, who would be right one to write some manual? | 17:38 |
vishy | we're currently doing our multinode via custom packages and puppet | 17:38 |
Ryan_Lane | that's what I'm hoping to do | 17:39 |
Ryan_Lane | minus the custom packages | 17:39 |
vishy | i'd like to use chef though, but we can't use the opscode platform | 17:39 |
alekibango | vishy: could you please go public with your multinode puppet and packages? :) | 17:39 |
vishy | alekibango: at the very least we'll make some blog posts | 17:39 |
alekibango | vishy: great | 17:39 |
alekibango | and nova-manage manual? | 17:40 |
alekibango | :) | 17:40 |
vishy | sure | 17:40 |
vishy | although virtually all of the commands have docstrings | 17:40 |
vishy | all the ones i wrote do anyway | 17:40 |
vishy | :) | 17:40 |
alekibango | :) | 17:40 |
vishy | and the docstrings display when you enter command without options | 17:40 |
vishy | :) | 17:40 |
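The pattern vishy describes can be sketched like this — the class is an illustrative stand-in, not nova-manage's real code: the tool prints a subcommand's docstring as its usage text when the command is entered without its options.

```python
class ProjectCommands:
    """Illustrative stand-in for a nova-manage command class."""

    def quota(self, project_id, key=None, value=None):
        """project quota <project_id> [key] [value]"""
        return {key: value}

def usage(method):
    # The first docstring line doubles as the usage message shown when
    # the command is invoked without the arguments it needs.
    return method.__doc__.splitlines()[0]

print(usage(ProjectCommands.quota))  # project quota <project_id> [key] [value]
```

The docstring is written once on the handler and reused for help output, so documenting the command and documenting the code are the same act.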
*** irahgel has left #openstack | 17:41 | |
*** Grizzletooth has joined #openstack | 17:42 | |
*** Ryan_Lane is now known as Ryan_Lane|food | 17:45 | |
rlucio | alekibango: i thought someone yesterday was saying they were making chef recipes out of the multinode instructions we posted | 17:46 |
jonwood | rlucio: That's me. | 17:46 |
rlucio | ah right :) | 17:47 |
jonwood | It's in progress, but paying work comes first ;) | 17:47 |
*** gzmask has joined #openstack | 17:48 | |
*** johnbergoon has joined #openstack | 17:50 | |
gzmask | does this project exist to make image migration between cloud providers possible? | 17:51 |
alekibango | gzmask: not yet | 17:51 |
alekibango | well, depending on what you mean | 17:51 |
gzmask | somewhere down the road? | 17:51 |
alekibango | you can use something like deltacloud or libcloud later | 17:51 |
alekibango | when it will have nova support | 17:52 |
jdarcy | It makes setting up your own cloud provider easier. Moving work between them is kind of a different problem. | 17:52 |
alekibango | to do that from the point of user | 17:52 |
jdarcy | alekibango: I expect we'll be working on that Very Soon. | 17:52 |
alekibango | balancing resources between providers is not there :) | 17:53 |
alekibango | but might make sense | 17:53 |
gzmask | see... what's the difference between this and Eucalyptus? | 17:53 |
jdarcy | Scheduling work across different providers is really pretty challenging. | 17:53 |
*** grizzlet1oth has joined #openstack | 17:54 | |
gzmask | I know that this is done with python and Eucalyptus is done with java | 17:54 |
alekibango | jaypipes: but might be interesting project | 17:54 |
jdarcy | You have to account for differences not just in the costs, but in how the costs are even expressed (e.g. instances defined by CPU vs. instances defined by memory, with different performance balances). | 17:54 |
*** silassewell has joined #openstack | 17:54 | |
alekibango | yes, there would be need to harmonize accounting :) | 17:55 |
jdarcy | Then the numbers all change on you, you have to reason about data locality, different people/groups having different budgets or rights/credentials on different providers, etc. | 17:55 |
zykes- | jdarcy: wondering, what setup do you recommend for hosts with SAN ? | 17:55 |
alekibango | jdarcy: still, this is done in transportation and logistics :) | 17:55 |
*** BK_man has joined #openstack | 17:56 | |
alekibango | jdarcy: how do you store nova images? | 17:56 |
jdarcy | zykes-: Um, I generally don't recommend hosts with SAN. What are you actually trying to do? | 17:56 |
*** Grizzletooth has quit IRC | 17:56 | |
zykes- | jdarcy: sorry the cluster nodes | 17:56 |
zykes- | have SAN | 17:56 |
alekibango | jaypipes: we talked about HA image storage for nova | 17:56 |
alekibango | exploring the space | 17:57 |
*** rnirmal has joined #openstack | 17:57 | |
jdarcy | alekibango: Haven't decided yet how to store Nova images. At one level, it's Just Another Back End for iwhd. It just needs to talk to whatever get/put API exists for manipulating images "from outside" or have an agent "inside" if that's not possible. | 17:58 |
alekibango | iwhd? | 17:58 |
jdarcy | alekibango: Never really looked at Nova myself, so I don't know yet what that'll look like. | 17:59 |
jdarcy | alekibango: iwhd = Image WareHouse Daemon. The storage component of Deltacloud:NG. | 17:59 |
alekibango | afaik nova stores disk images for kvm as files in a directory | 17:59 |
alekibango | for the living instances | 17:59 |
jdarcy | Then as long as we can get to that directory (e.g. if it's mountable via NFS/GlusterFS/etc.) then we can probably just use the filesystem back end. | 18:00 |
alekibango | and will use glance for cold image storage and registry | 18:00 |
alekibango | jdarcy: so, pnfs, glusterfs, ceph, or RAID10 only ?? | 18:00 |
jdarcy | Right, for Glance we'll just use that API. We could also *provide* that API, but that's a whole different discussion. ;) | 18:00 |
*** westmaas has quit IRC | 18:01 | |
rlucio | alekibango: yes, by default running instances are files in /var/lib/nova/instances/instance-<id> but nova-volume can be set up to use AOE instead | 18:01 |
alekibango | rlucio: i need to read nova-volume :) | 18:01 |
alekibango | what nova-volume does? | 18:01 |
alekibango | (what else) | 18:02 |
rlucio | oh | 18:02 |
*** krish has joined #openstack | 18:02 | |
rlucio | just that, it allocates local filespace for vms or AOE space and manages them | 18:02 |
* alekibango is reading src. its short | 18:02 | |
*** grizzlet1oth is now known as grizzletooth | 18:03 | |
*** kevnfx has quit IRC | 18:05 | |
*** grizzletooth has quit IRC | 18:06 | |
*** grizzletooth has joined #openstack | 18:06 | |
rlucio | oh i take back what i said :) looks like nova-compute allocates disk files, and the volume manager is like EBS volumes via aoe | 18:06 |
uvirtbot | New bug: #663410 in nova "Ips aren't set to leased properly if nova-network node is down or restarted" [Undecided,New] https://launchpad.net/bugs/663410 | 18:07 |
rlucio | foot --> mouth | 18:07 |
*** burris has joined #openstack | 18:07 | |
*** grizzletooth has quit IRC | 18:07 | |
*** grizzletooth has joined #openstack | 18:07 | |
vishy | rlucio: that is correct | 18:08 |
vishy | not recommended though | 18:08 |
rlucio | vishy: heh, thanks for confirming ... what is not recommended? | 18:08 |
vishy | aoe modules have been causing kernel panics | 18:08 |
vishy | and destroying raid configurations on reboot | 18:08 |
vishy | :( | 18:08 |
rlucio | oh yea, i've heard some .. ah.. not great things about aoe scaling | 18:09 |
vishy | ymmv | 18:09 |
vishy | not to mention it is incredibly slow | 18:09 |
vishy | :( :( | 18:09 |
vishy | going to propose iscsi post-austin | 18:09 |
vishy | it may be ok in maverick | 18:10 |
vishy | we are using lucid and it has caused way too many headaches | 18:10 |
rlucio | that seems to be the standard way of doing it, afaik | 18:10 |
creiht | mtaylor: ok back to cutting a tarfile | 18:10 |
rlucio | iscsi i mean | 18:10 |
mtaylor | creiht: yes. so I asked dendrobates a question but he ignored me :) | 18:11 |
creiht | dendrobates: do we have a preferred method for cutting RC packages? | 18:11 |
vishy | rlucio: yes iscsi works much better | 18:11 |
creiht | hehe | 18:11 |
dendrobates | mtaylor: sorry. | 18:11 |
dendrobates | mtaylor: it is on my list of things to figure out today | 18:11 |
mtaylor | creiht: my vote is to just release 1.1.0 and mention that it's an RC and then when you have changes release a 1.1.1 and at some point say "hey, 1.1.4 is good" | 18:11 |
mtaylor | dendrobates: cool | 18:11 |
creiht | heh | 18:12 |
creiht | mtaylor: that sounds lame :) | 18:12 |
dendrobates | I was thinking of tagging austin-rc and tarring from there. | 18:13 |
mtaylor | dendrobates: so my question is more on what to version the tarball | 18:13 |
creiht | how does tagging work with lp? | 18:13 |
dendrobates | yeah, I want to move away from version numbers and towards dates, for releases | 18:14 |
mtaylor | lp has nothing to do with tagging - it's all a bzr level thing | 18:14 |
mtaylor | dendrobates: that's fine with me, I prefer dates | 18:14 |
* creiht dislikes dates as release versions | 18:14 | |
creiht | :) | 18:14 |
dendrobates | creiht: why? | 18:14 |
creiht | it seems like most people who do eventually go back to normal versioning | 18:15 |
dendrobates | I don't like version numbers because they have built in connotations unrelated to your software. | 18:15 |
alekibango | dendrobates: +1 | 18:16 |
alekibango | that depends on development model | 18:16 |
dendrobates | What I want to do is avoid the version 1.0 controversy in Nova | 18:16 |
creiht | dendrobates: that is only true during the adolescent stages of a new project | 18:16 |
vishy | mtaylor: any thought on hudson/tarmac stripping whitespace? Or should we just start yelling at developers and tell them to configure their dev env properly? | 18:16 |
dendrobates | creiht: which we are in. | 18:16 |
creiht | and which we will get out of :) | 18:16 |
mtaylor | vishy: oh, well, sorry. totally forgot. :) | 18:17 |
dendrobates | but you still have the x.0 issue where people treat any x.0 like it's a beta | 18:17 |
dendrobates | thanks to MS | 18:17 |
mtaylor | vishy: I _WOULD_ definitely start yelling at devs to configure their dev env properly | 18:18 |
creiht | and that is a bad thing? :) google seems to get by with everything being beta forever :) | 18:18 |
vishy | mtaylor: of course that is about as valuable as yelling at a brick wall :) | 18:19 |
arthurc | has anyone tested this workaround: http://etherpad.openstack.org/NovaMultinodeInstall ? | 18:20 |
mtaylor | vishy: hehe | 18:20 |
uvirtbot | New bug: #663421 in swift "The SAIO documentation numbering scheme is crazy" [Undecided,New] https://launchpad.net/bugs/663421 | 18:21 |
vishy | arthurc: workaround? | 18:22 |
vishy | alekibango: where do you live? | 18:22 |
arthurc | humm, how to | 18:23 |
arthurc | vishy: I think he lives in Europe, he's in the same time zone as me | 18:23 |
alekibango | vishy: cz (eu) | 18:23 |
vishy | alekibango: cool. The work on documentation is much appreciated, btw. | 18:24 |
arthurc | alekibango: I'm testing right now your "pad" | 18:24 |
alekibango | vishy: i am trying to learn it by helping, it works better than just by waiting :) | 18:25 |
vishy | alekibango: do you have a specific project in mind afterwards? Or you just like the technology? | 18:25 |
alekibango | my customer has few hundreds of servers and he wants to evolve to easier, manageable system... | 18:26 |
vishy | alekibango: gotcha | 18:27 |
*** roamin9 has quit IRC | 18:32 | |
*** jc_smith has joined #openstack | 18:33 | |
*** Kdecherf has quit IRC | 18:36 | |
alekibango | vishy: and i would also like to prepare some cloud of DESKTOPs :) | 18:37 |
*** Kdecherf has joined #openstack | 18:37 | |
alekibango | that might sound strange, but it will work :) | 18:38 |
crazed | vishy: thanks for the information | 18:45 |
uvirtbot | New bug: #663437 in nova "The root drive for an instance is always the size of the image instead of being resized" [Undecided,New] https://launchpad.net/bugs/663437 | 18:46 |
*** daleolds has joined #openstack | 18:48 | |
*** kashyapc has quit IRC | 18:49 | |
netbuzzme | hey Vishy | 18:49 |
netbuzzme | this is Ajey Gore from ThoughtWorks | 18:49 |
netbuzzme | we are trying to get Nova working on Centos | 18:50 |
netbuzzme | and we are still stuck with problems on network virtualization part | 18:50 |
alekibango | netbuzzme: is something unusual in logs? | 18:50 |
netbuzzme | do you guys suggest that we should run it on Centos or we should move to ubuntu | 18:50 |
alekibango | netbuzzme: nova devels are using ubuntu | 18:51 |
alekibango | i use debian rather :) | 18:51 |
alekibango | and source works for me | 18:52 |
alekibango | (when configured) | 18:52 |
netbuzzme | i am badly stuck at nova actually, on one python module: libxml2 for python2.6 on centos5.5 | 18:52 |
netbuzzme | we at ThoughtWorks use CentOS and Redhat mainly | 18:52 |
netbuzzme | and thats the reason we are trying with CentOS | 18:52 |
netbuzzme | so.. thats why | 18:52 |
alekibango | python 2.6 is really needed right now | 18:52 |
vishy | i'm sure it will be possible, but it may require a lot of custom hacking | 18:53 |
netbuzzme | yeah | 18:53 |
*** rnirmal has quit IRC | 18:53 | |
vishy | libxml is barely used iirc | 18:53 |
netbuzzme | we had to install different version for python | 18:53 |
*** rnirmal has joined #openstack | 18:53 | |
netbuzzme | anyway... I will get back to you on this - let me re-run my code | 18:53 |
netbuzzme | do you know if someone is doing rpm packaging for OpenStack | 18:54 |
netbuzzme | if not then I can put some efforts into that | 18:54 |
alekibango | netbuzzme: there were some | 18:54 |
jdarcy | Let me find that Bugzilla ticket. | 18:54 |
alekibango | fedora might be first :) as it has dependencies | 18:54 |
mtaylor | netbuzzme: ubuntu++ | 18:54 |
netbuzzme | mtaylor: lots of enterprises have not moved to ubuntu yet | 18:54 |
mtaylor | netbuzzme: I know ... makes me sad | 18:55 |
mtaylor | netbuzzme: in my other project, centos5 is my largest constant nightmare | 18:55 |
netbuzzme | mtaylor: ldap over tls did not work for us on ubuntu, we integrate with active directory | 18:55 |
jdarcy | Here's one for Swift. https://bugzilla.redhat.com/show_bug.cgi?id=617632 | 18:55 |
uvirtbot | bugzilla.redhat.com bug 617632 in Package Review "Review Request: openstack-swift - OpenStack Object Storage (swift)" [Medium,Closed: errata] | 18:55 |
mtaylor | netbuzzme: hrm. interesting. that's quite annoying :) | 18:55 |
jdarcy | Dunno about Nova, though. | 18:55 |
netbuzzme | mtaylor: yeah, and actually if we move to ubuntu, we need to change a lot of stuff | 18:55 |
netbuzzme | mtaylor: since I am head of global IT there, I can influence it here, but it won't work unless we get TLS binding over ldap working with AD and MSSFU on ubuntu, sadly it does not work right now | 18:56 |
netbuzzme | mtaylor: so we need to be with CentOS for now | 18:56 |
netbuzzme | vishy: I got your contact from Jonathan at Rackspace | 18:57 |
netbuzzme | vishy: that's why I thought I should ping you | 18:57 |
netbuzzme | vishy: anyway we will start effort on packaging Nova for CentOS | 18:58 |
vishy | vishy: awesome, unfortunately we haven't done anything with centos | 18:59 |
vishy | er | 18:59 |
alekibango | lol | 18:59 |
vishy | netbuzzme: ^^ | 18:59 |
vishy | :) | 18:59 |
vishy | i talk to myself sometimes :p | 18:59 |
vishy | netbuzzme: it would be awesome to have a working package though | 19:00 |
creiht | mtaylor: so if I'm just uploading a tarball, is there anything preventing me from making the rc change in setup.py locally and building the tarball and uploading? | 19:01 |
creiht | without checking in the rc change to setup.py | 19:01 |
creiht | mtaylor: ugh... I'm still not an admin for lp:swift :/ | 19:02 |
dendrobates | creiht: really, I thought I added you. | 19:02 |
dendrobates | let me check | 19:02 |
netbuzzme | vishy: okay | 19:03 |
dendrobates | creiht: what is it not letting you do? | 19:04 |
mtaylor | creiht: I'm not either | 19:06 |
mtaylor | dendrobates, creiht: lp:swift is a branch owned by hudson | 19:06 |
creiht | dendrobates: I don't see links that I used to a while back | 19:06 |
creiht | https://launchpad.net/swift | 19:06 |
creiht | is what I'm referring to | 19:06 |
mtaylor | creiht: ah - that's different :) | 19:06 |
*** krish has quit IRC | 19:06 | |
creiht | I also didn't see myself in the OpenStack Administrators group (which is listed as the maintainer) | 19:07 |
dendrobates | so the project, I'm fixing that. I thought what I did last time fixed it. | 19:07 |
dendrobates | just a sec | 19:07 |
dendrobates | creiht: try it now | 19:09 |
creiht | dendrobates: ok that looks better :) | 19:10 |
creiht | The layers of obfuscation of groups within groups get confusing | 19:11 |
creiht | :) | 19:11 |
zykes- | jdarcy: what i was wondering is that my company has a compellant san that we use on the compute nodes, can i use that for storage ? | 19:12 |
dendrobates | vishy: I am getting a traceback from nova-objectstore on trunk: http://paste.openstack.org/show/66/ | 19:12 |
dendrobates | vishy: this is new for me. before I dig in, have you seen this? | 19:13 |
*** Ryan_Lane|food is now known as Ryan_Lane | 19:14 | |
vishy | yup | 19:14 |
vishy | wrong version of boto | 19:14 |
vishy | you need 1.9b | 19:15 |
vishy | or that happens | 19:15 |
vishy | going to lunch, see you at the meeting | 19:15 |
*** vishy is now known as vishy-afk | 19:15 | |
zykes- | what's the ppa for nova ? | 19:16 |
*** nielsh has joined #openstack | 19:18 | |
*** westmaas has joined #openstack | 19:19 | |
dendrobates | hmm python-boto | 1.9b-1ubuntu3 | http://archive.ubuntu.com/ubuntu/ maverick/main amd64 Packages | 19:22 |
jdarcy | zykes-: I think that question is better answered by someone else here. | 19:22 |
*** nielsh has left #openstack | 19:22 | |
dendrobates | zykes-: https://launchpad.net/~nova-core/+archive/ppa | 19:25 |
crazed | damn all that work on openstack and i had to rip it down hahah | 19:25 |
crazed | oh well i like what i see a lot | 19:25 |
crazed | the configuration files are a bit.. messy but that's not a big deal | 19:25 |
* alekibango falls asleep, gn8 for now | 19:26 | |
creiht | ugh... I can't seem to figure out how to get my pgp key up to lp :/ | 19:27 |
dendrobates | https://edge.launchpad.net/~creiht/+editpgpkeys | 19:28 |
creiht | dendrobates: yeah but when I try to import, it says that it can't | 19:28 |
*** dabo has joined #openstack | 19:28 | |
jaypipes | alekibango: sorry, stepped away for a bit... | 19:29 |
dendrobates | creiht: go to #launchpad and scream :) | 19:29 |
creiht | heh | 19:29 |
crazed | i uploaded my key awhile back | 19:29 |
crazed | but i have NO idea where it is | 19:30 |
crazed | probably lost forever | 19:30 |
jaypipes | jdarcy: "how to store Nova images"? Nova doesn't yet have its own image format, so no worries there ;) | 19:30 |
dendrobates | mtaylor: the ppa's have not rebuilt in 2 weeks. | 19:30 |
creiht | there we go | 19:31 |
*** joearnold has quit IRC | 19:31 | |
jdarcy | jaypipes: Good, I'll just implement a back end that writes to /dev/null. I'll call it the "web scale" back end. | 19:31 |
jaypipes | jdarcy: well played. | 19:32 |
*** al-maisan is now known as almaisan-away | 19:32 | |
* jaypipes cat fedora > /dev/null | 19:32 | |
creiht | /dev/null is webscale | 19:32 |
creiht | hehe | 19:32 |
jdarcy | But seriously, whenever you have a format/API I expect we'll add a back end for it. | 19:32 |
jaypipes | jdarcy: yup, will do of course! | 19:32 |
jaypipes | jdarcy: can I count on your input on the ML about the new image format? would be great to get... | 19:33 |
jdarcy | I'd love to throw Fedora in /dev/null. Just had two F14 VM installs fail today, apparently because it can't handle having <1GB. | 19:33 |
jaypipes | heh | 19:33 |
crazed | yeah you need more than a gig of memory to get the installer running | 19:34 |
jaypipes | eday: alrighty, let's get the rumble on in here about distributed data store! ;P | 19:34 |
jdarcy | jaypipes: Sure, I'll take a look. | 19:34 |
eday | jaypipes: hehe.. just wanted to know how one would configure it the way your comments describe in the etherpad :) | 19:35 |
arthurc | jdarcy: I've installed F14 alpha on kvm with 512MB... | 19:35 |
eday | jaypipes: because I don't think it's currently possible | 19:35 |
jaypipes | eday: actually, I've been looking at how swift does their sqlite replication and I think for our data sizes (local to nodes, of course) that might work... | 19:36 |
creiht | mtaylor: so how do I upload a tarball for the 1.1.0 rc? | 19:36 |
jaypipes | eday: and then for places where we *do* need aggregation or a very large dataset, we can use a "beefier" RDBMS. | 19:37 |
jaypipes | eday: like postgresql. | 19:37 |
*** dysinger has quit IRC | 19:37 | |
creiht | the only way I have found wants to upload for the 1.0 series | 19:37 |
jaypipes | eday: so, right now, whatever node runs the various daemons in nova/bin/ will create a set of data in sqlite that corresponds to what the daemon does in the grand scheme of things. | 19:38 |
eday | jaypipes: well, the API server needs access to those as well, and it will be the one creating them first | 19:39 |
mtaylor | creiht: go to the 1.1.0 milestone, and you'll see a link "create a release" | 19:39 |
creiht | ahh | 19:39 |
mtaylor | creiht: there you can release that milestone and then upload files to that release | 19:39 |
mtaylor | creiht: releases are always the result of milestones | 19:39 |
jaypipes | eday: and my point on the etherpad was that we currently have 1) no tests for the data distribution across multiple nodes, and 2) no plan for cross-node-datastore IntegrityError raising through the ORM. | 19:39 |
creiht | hrm | 19:39 |
mtaylor | so - what probably wants to happen here is that you make a 1.1.0-RC milestone, and then release that | 19:40 |
creiht | k | 19:40 |
Ryan_Lane | netbuzzme: regarding tls binding to AD: do you have other things that do this properly? I've never gotten TLS binding working with AD, for anything | 19:40 |
jaypipes | eday: so, while I like the etherpad discussion, it's missing some key details on *where* each *specific table* will reside. | 19:40 |
Ryan_Lane | netbuzzme: I'm not sure AD supports it | 19:40 |
eday | jaypipes: 100% agree on the testing, but I don't see how one would configure the current code to be distributed (our first couple paragraphs) | 19:41 |
Ryan_Lane | netbuzzme: ldaps does generally work, though | 19:41 |
eday | jaypipes: because each worker node does not have its own data store. they each access it, but it needs to be shared with the API servers, so it's central | 19:41 |
creiht | mtaylor: hehe.. now the -rc milestone is after the 1.1.0 milestone | 19:41 |
jaypipes | eday: in the default configuration, if someone runs bin/nova-api on one node, bin/nova-network on another, and bin/nova-compute on another, the data will be spread among all 3 nodes by default. | 19:41 |
mtaylor | creiht: hehe. well, that can probably change by adjusting the 1.1.0 target date | 19:42 |
eday | jaypipes: but nothing will work :) api server dumps all config into the db, and sends a message with a reference to the record in the DB | 19:42 |
jaypipes | eday: you and I know nothing will work. but the test suite doesn't ;) | 19:42 |
eday | jaypipes: if the worker can't access the same DB as the api, it can't do much | 19:42 |
creiht | there we go | 19:43 |
jaypipes | eday: if we implemented the sqlite replication that is in swift, each node would keep a copy of the same data... | 19:43 |
* jaypipes like creiht's sqlite replication... | 19:43 | |
eday | jaypipes: yep, agree on the testing, but that's another discussion | 19:43 |
jaypipes | eday: ok, shelve the testing part of the discussion... | 19:44 |
creiht | jaypipes: well redbo wrote most of that | 19:44 |
jaypipes | creiht: it's ok, just take credit for it. | 19:44 |
creiht | heh | 19:44 |
eday | jaypipes: well, sqlite replication may not be enough... because the API servers and scheduler workers need aggregates of all the data in a single DB, where each worker just has what that node cares about | 19:44 |
jaypipes | eday: yes, that is all true. | 19:44 |
eday | jaypipes: so, SQLite replication *may* be the answer, and that's something to consider during the summit discussion :) | 19:45 |
jaypipes | eday: which is why I said above we could use a "beefier" rdbms for that aggregation stuff. The problem with that, of course, is keeping the data in sync. Or, we just say "on conflicting data, use the local data store on the node" | 19:45 |
eday | jaypipes: but it sounds like your comments in the etherpad imply that's possible right now, and I don't understand how :) | 19:45 |
jaypipes | eday: I meant the following: "if you install the nova binaries on different machines and don't change the default (of having a local sqlite data store), nothing will work, as all the data will be distributed." | 19:46 |
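The failure mode jaypipes describes comes from each nova daemon defaulting to its own local sqlite file. A hedged sketch of the remedy being discussed: point every service at one shared database through the `sql_connection` flag (the hostname and credentials below are placeholders, not any real deployment's values).

```
# Hypothetical flagfile shared by nova-api, nova-network and nova-compute.
# Without an override like this, each daemon keeps its own sqlite store,
# so "all the data will be distributed" and nothing works.
--sql_connection=mysql://nova:secret@dbhost/nova
```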
eday | jaypipes: yup, agree. So my proposal is to figure out how to aggregate/sync that data basically. | 19:47 |
jaypipes | eday: so I suppose I said it weirdly... :) | 19:47 |
jaypipes | eday: ++ | 19:47 |
rlucio | probably distributed is the wrong word there :) | 19:47 |
jaypipes | eday: and what is your proposed solution? or are you just saying "let's get a solution"? | 19:47 |
jaypipes | rlucio: true ;) | 19:47 |
eday | jaypipes: I guess I had the assumption that "currently in a WORKING nova system... this is how it works." :) | 19:47 |
Ryan_Lane | is the worry with a database like mysql the replication lag time? | 19:48 |
jaypipes | eday: ah, let me correct the etherpad then | 19:48 |
jaypipes | Ryan_Lane: you'll always have some sort of lag unless you use some sort of real-time database or something like MySQL Cluster. | 19:48 |
Ryan_Lane | right | 19:48 |
creiht | mtaylor: woot... tarball cut, can you update the package stuff? | 19:48 |
jaypipes | Ryan_Lane: regardless of whether you use a sync process or not... | 19:48 |
eday | jaypipes: I don't have a specific solution, that's something for discussion. I just wanted the problem stated with some ideas of how to solve it, then we can hash out details at the summit. But the general solution is to have each worker keep its own private store and somehow each one will push updates to the api/scheduler. This may be within the DB, or within the app for more context (via msg bus) | 19:49 |
Ryan_Lane | we solve that at wikimedia by using masters for reads for requests that *must* be up to date | 19:49 |
jaypipes | eday: ++ ok, I'll remove my thoughts on the etherpad then, as they were oddly stated (except for the testing issue mentions) | 19:50 |
eday | jaypipes: yeah, keep the testing for sure :) | 19:50 |
Ryan_Lane | question is, for most queries, does it matter if the data is completely up to date? | 19:50 |
jaypipes | Ryan_Lane: yup, but there's still *some* lag ;) | 19:50 |
Ryan_Lane | right. whether that matters or not is the question | 19:50 |
jaypipes | Ryan_Lane: low value data and high value data... treated differently lots of times, and that's fine by me! :) | 19:50 |
Ryan_Lane | the master will always be up to date | 19:51 |
Ryan_Lane | that's one benefit of using master/slave instead of master/master | 19:51 |
jaypipes | eday: updated. lemme know if that little paragraph makes more sense... | 19:52 |
mtaylor | creiht: yup | 19:52 |
jaypipes | Ryan_Lane: ya, agreed. | 19:52 |
jaypipes | Ryan_Lane: but then you're not looking at replicated data are you? ;) | 19:53 |
eday | jaypipes: makes sense | 19:53 |
jaypipes | eday: k | 19:53 |
Ryan_Lane | what data needs to be replicated | 19:53 |
jaypipes | eday: sorry for the confusion! :) | 19:53 |
Ryan_Lane | ? | 19:53 |
Ryan_Lane | images? | 19:53 |
jaypipes | Ryan_Lane: you said above that you worried about "replication lag in MySQL" and I'm saying if you're hitting the master, you're not hitting any replicated data...that's all. | 19:54 |
Ryan_Lane | that is handled by the filesystem, right? | 19:54 |
Ryan_Lane | ah | 19:54 |
Ryan_Lane | yeah | 19:54 |
creiht | mtaylor: also look over the release stuff to make sure I did it right (if you don't mind) | 19:54 |
jaypipes | :) | 19:54 |
Ryan_Lane | wait. yes. you are hitting replicated data | 19:54 |
eday | jaypipes: np :) was just hoping I didn't miss some big patch that made that possible | 19:54 |
Ryan_Lane | everyone writes to the master, and the master replicates the data to the slaves. you read from the master directly to ensure that you avoid lag time. | 19:55 |
jaypipes | Ryan_Lane: right, so you're not reading the replicated copy of the data. | 19:55 |
Ryan_Lane | queries that aren't time sensitive go to the slaves | 19:55 |
jaypipes | yup | 19:55 |
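The wikimedia pattern Ryan_Lane describes, masters for lag-sensitive reads and slaves for the rest, can be sketched in a few lines. The connection objects here are plain strings standing in for real DB handles; nothing below is actual wikimedia or nova code.

```python
import random


class QueryRouter:
    """Route writes and time-sensitive reads to the master, other reads to a slave."""

    def __init__(self, master, slaves):
        self.master = master
        self.slaves = slaves

    def connection_for(self, query, needs_fresh=False):
        # Writes, and reads that must see the latest data, hit the master.
        if needs_fresh or not query.lstrip().upper().startswith("SELECT"):
            return self.master
        # Everything else tolerates replication lag and load-balances.
        return random.choice(self.slaves)
```

The whole design question in the channel ("does it matter if the data is completely up to date?") is exactly the `needs_fresh` flag: each caller has to decide whether its read can lag.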
jaypipes | Ryan_Lane: do you use memcached as well in between for other data? | 19:56 |
Ryan_Lane | yeah | 19:56 |
Ryan_Lane | we fairly heavily use memcached | 19:56 |
zykes- | memcached for what ? | 19:56 |
Ryan_Lane | mediawiki object caching | 19:57 |
*** metoikos has joined #openstack | 19:57 | |
Ryan_Lane | user sessions, other things | 19:57 |
*** _morfeas has quit IRC | 19:57 | |
*** joearnold has joined #openstack | 19:58 | |
Ryan_Lane | anything that is expensive that is cheaper to cache than to recreate :) | 19:58 |
creiht | You guys need to think more distributed :) | 19:58 |
Ryan_Lane | :D | 19:58 |
Ryan_Lane | we had 114k req/s today at peak | 19:59 |
Ryan_Lane | we have like 350 servers give or take | 19:59 |
*** morfeas has joined #openstack | 19:59 | |
creiht | Ryan_Lane: how much of your traffic is read vs. write? | 20:02 |
*** joearnold has quit IRC | 20:02 | |
Ryan_Lane | nearly all is read | 20:02 |
*** burris has quit IRC | 20:03 | |
*** joearnold has joined #openstack | 20:05 | |
*** ctennis has quit IRC | 20:06 | |
*** burris has joined #openstack | 20:07 | |
*** morfeas has quit IRC | 20:08 | |
*** westmaas has quit IRC | 20:11 | |
*** jakedahn has joined #openstack | 20:12 | |
*** burris has quit IRC | 20:15 | |
*** arthurc has quit IRC | 20:16 | |
*** ctennis has joined #openstack | 20:19 | |
dendrobates | wow, I was confused. pip installed boto 2.0b1 in /usr/local/python on my system which does not work with nova | 20:22 |
*** pvo has joined #openstack | 20:26 | |
*** pvo has joined #openstack | 20:26 | |
*** ChanServ sets mode: +v pvo | 20:26 | |
*** dendrobates is now known as dendro-afk | 20:27 | |
vishy-afk | eday: catching up on scrollback, one possibility we've been discussing is keeping data central but just giving all of the workers read only access | 20:31 |
*** vishy-afk is now known as vishy | 20:31 | |
*** allsystemsarego has quit IRC | 20:31 | |
vishy | eday: and pushing updates from the workers through the queue, only having one place that actually writes to the db | 20:31 |
vishy | eday: this is more for security, not allowing a compromised worker to compromise the entire system, but it could be very useful for scaling as well | 20:32 |
vishy | because the workers could just have read slaves of the db. | 20:32 |
*** johnpur has joined #openstack | 20:34 | |
*** ChanServ sets mode: +v johnpur | 20:34 | |
eday | vishy: security wise that would be ok, but I wonder about scalability... | 20:34 |
*** dendro-afk is now known as dendrobates | 20:35 | |
*** rnirmal has quit IRC | 20:36 | |
Ryan_Lane | from a scalability point, that would be a major plus | 20:37 |
*** silassewell has quit IRC | 20:37 | |
Ryan_Lane | you can load balance your reads | 20:37 |
Ryan_Lane | as long as no reads are time sensitive, that's a good idea | 20:37 |
eday | Ryan_Lane: well, I don't think reads are going to be the bottleneck here, at least from the workers. The workers will mostly be doing writing (state changes, creation, ...) | 20:39 |
*** perestrelka has quit IRC | 20:39 | |
eday | Ryan_Lane: the API front end will be more read heavy though | 20:39 |
Ryan_Lane | vishy was recommending read-only access for the workers | 20:39 |
Ryan_Lane | ah. pushing through the queue | 20:40 |
Ryan_Lane | I knew I must have been missing something :) | 20:40 |
eday | Ryan_Lane: yeah, write via confirmed queue msg :) | 20:40 |
vishy | eday, Ryan_Lane: some sort of cluster aggregation will probably still be necessary, one writer per cluster, perhaps? | 20:41 |
vishy | lots to discuss at the summit for sure | 20:41 |
Ryan_Lane | why the need for more than one master? | 20:41 |
eday | vishy: yeah.. I was thinking scheduler per cluster, and maybe the sched processes manage all the writes | 20:41 |
Ryan_Lane | redundancy across datacenters? | 20:41 |
eday | vishy: and then API servers would aggregate schedulers/clusters | 20:42 |
vishy | Ryan_Lane: our goal is to support 1 million host machines | 20:42 |
Ryan_Lane | ah. ok. I can see why then :) | 20:42 |
vishy | 1 million hosts sending through the queue is probably a bottleneck | 20:42 |
vishy | :) | 20:42 |
creiht | There is a reason there are no queues in swift :) | 20:43 |
eday | creiht: well, probably not a traditional message queue, but many things are a queue of some form (ie, replication stream) :) | 20:44 |
* creiht was referencing a traditional message queue | 20:45 | |
Ryan_Lane | maybe break the hosts into sections, and have a queue per section? | 20:45 |
Ryan_Lane | x number of hosts per section, and therefore per queue? | 20:45 |
eday | creiht: my point was queues are fine to use in large, distributed systems, if done right of course (whether they are traditional generic things or application-specific) | 20:47 |
*** burris has joined #openstack | 20:47 | |
eday | creiht: in fact, they are kind of required :) | 20:47 |
eday | Ryan_Lane: Yeah, we had a number of discussions of how to break the clusters out at the last summit, as well as experience with how rackspace does it now | 20:48 |
eday | Ryan_Lane: need to work with network/switch/data center/... boundaries, which give natural partitioning | 20:48 |
*** cloudmeat1 has joined #openstack | 20:48 | |
Ryan_Lane | ah, yeah. that makes a lot of sense | 20:49 |
vishy | eday: right now we're running multiple api nodes with HAProxy in front | 20:50 |
pvo | Ryan_Lane: We're looking at things at a regional level, then trying to figure out how to logically segment it at the level eday was mentioning | 20:50 |
pvo | we (rackspace) will have to do a lot of work around this soon | 20:51 |
vishy | and monit managing them | 20:51 |
*** cloudmeat has quit IRC | 20:51 | |
vishy | we are still seeing greenthread duplicate reads occasionally | 20:51 |
vishy | which pretty much kills the api | 20:51 |
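The deployment vishy describes, several nova-api nodes behind HAProxy with monit restarting failed ones, might look roughly like the fragment below. The addresses, ports, and server names are invented; the real config surely differed.

```
# haproxy.cfg fragment (hypothetical addresses and ports)
frontend nova_api
    bind *:8773
    default_backend api_nodes

backend api_nodes
    balance roundrobin
    # "check" health-checks each node, so a hung API process
    # (e.g. the greenthread duplicate-read failure) is pulled
    # out of rotation instead of killing the whole API.
    server api1 10.0.0.11:8773 check
    server api2 10.0.0.12:8773 check
```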
eday | vishy: reads on what? DB? queue? ...? | 20:54 |
*** kevnfx has joined #openstack | 20:54 | |
zykes- | what's recommended | 20:54 |
zykes- | MySQL or PostgreSQL ? | 20:54 |
dendrobates | meeting in 5 min in #openstack-meeting | 20:55 |
zykes- | is it okay to participate even though i'm not in a dev team ? | 20:56 |
creiht | zykes-: indeed... it is open to everyone | 20:56 |
* zykes- joins | 20:57 | |
dendrobates | it will be extra short today though | 20:57 |
dendrobates | this time I really mean it | 20:57 |
creiht | dendrobates: you are jinxing it just by saying that :) | 20:58 |
vishy | vishy: not exactly sure where, we didn't have time to debug it, so the haproxy/monit was a workaround | 20:58 |
zykes- | for a ten node setup should i do nova services on 3-4 standalone machines and then the rest compute - for documentation purposes? | 20:58 |
vishy | eday: ^^ | 20:58 |
vishy | i'm talking to myself a lot today :( | 20:58 |
eday | vishy: hehe | 20:58 |
vishy | eday: i think it was happening when a call would fail in the middle | 20:59 |
vishy | rpc.call that is | 20:59 |
eday | vishy: if you can get a stacktrace with the module, that would be good. Then we'd know where to pool the connections | 20:59 |
eday | vishy: ahh, ok | 20:59 |
zykes- | what's haproxy for ? | 20:59 |
vishy | eday: if i see it again i'll try and get a stack trace | 20:59 |
eday | cool | 21:00 |
*** pothos has quit IRC | 21:01 | |
*** pothos has joined #openstack | 21:01 | |
*** pothos has quit IRC | 21:03 | |
*** pothos has joined #openstack | 21:04 | |
*** morfeas has joined #openstack | 21:07 | |
*** cloudmeat has joined #openstack | 21:14 | |
*** ddumitriu has joined #openstack | 21:15 | |
*** niks has joined #openstack | 21:15 | |
niks | hiya all | 21:15 |
vishy | niks: hi, most of us are distracted in release meeting | 21:16 |
niks | hi vishy..is it in the same channel or its on different channel? can i join? | 21:17 |
vishy | openstack-meeting | 21:17 |
*** cloudmeat1 has quit IRC | 21:17 | |
niks | thanks vishy | 21:18 |
mtaylor | creiht: release looks good - for the debian package I'm going to need to set the version to 1.0.99+1.1.0rc1 because of the way version numbering works ... 1.1.0rc1 is treated as greater than 1.1.0, so if someone installs 1.1.0rc1 from the PPA, it would not be considered an upgrade to go to 1.1.0 | 21:21 |
mtaylor | creiht: it's a normal enough debian packaging thing to need to do - I just wanted to give you a heads up about it | 21:21 |
creiht | mtaylor: ok that is good to know for future | 21:22 |
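The ordering problem mtaylor is working around is easy to reproduce: GNU `sort -V` follows comparison rules close to dpkg's, and shows both why `1.1.0rc1` would never "upgrade" to `1.1.0` and why the `1.0.99+1.1.0rc1` workaround sorts below both. (Debian's other idiom is a `~` suffix, as in `1.1.0~rc1`, which also sorts before `1.1.0`.)

```shell
# Older versions must sort first; note that 1.1.0rc1 lands *after* 1.1.0,
# which is why a PPA user on the rc would never see 1.1.0 as an upgrade.
printf '1.1.0rc1\n1.1.0\n1.0.99+1.1.0rc1\n' | sort -V
```

On a system with dpkg available, `dpkg --compare-versions 1.1.0rc1 gt 1.1.0 && echo rc1-is-newer` demonstrates the same thing against the real comparator.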
*** jrussellnts_ has joined #openstack | 21:23 | |
*** jakedahn has quit IRC | 21:27 | |
zykes- | prefered to use for nova - mysql or pgsql ? | 21:29 |
grizzletooth | zykes: mysql - easier to scale | 21:30 |
*** jdarcy has quit IRC | 21:35 | |
zykes- | so dendrobates, vmware support - anything that's been requested ? | 21:43 |
dendrobates | zykes-: not on the horizon, but if someone writes it we'll take it. | 21:46 |
vishy | dendrobates: never mind on the soren bug | 21:46 |
dendrobates | ok | 21:47 |
vishy | dendrobates: looks like soren fixed it already and we just don't have it in deploy yet | 21:47 |
dendrobates | ah | 21:47 |
vishy | sneaky :) | 21:47 |
*** morfeas has quit IRC | 21:49 | |
uvirtbot | New bug: #663551 in nova "OpenStack API does not give JSON/XML output for 500s" [High,New] https://launchpad.net/bugs/663551 | 21:51 |
*** pvo_ has joined #openstack | 21:53 | |
*** pvo_ has quit IRC | 21:53 | |
*** pvo_ has joined #openstack | 21:53 | |
*** ChanServ sets mode: +v pvo_ | 21:53 | |
*** DubLo7 has joined #openstack | 21:53 | |
*** pvo has quit IRC | 21:56 | |
*** jakedahn has joined #openstack | 21:57 | |
*** pvo_ has quit IRC | 21:57 | |
*** ppetraki has quit IRC | 22:01 | |
*** iammartian has quit IRC | 22:01 | |
*** mdomsch has quit IRC | 22:02 | |
*** jakedahn has quit IRC | 22:03 | |
*** netbuzzme has left #openstack | 22:05 | |
uvirtbot | New bug: #663559 in nova "pip-depends lists incorrect python-boto version" [Critical,Confirmed] https://launchpad.net/bugs/663559 | 22:06 |
*** miclorb_ has joined #openstack | 22:10 | |
*** jakedahn has joined #openstack | 22:10 | |
dendrobates | anyone care to review my 2 byte change: https://code.launchpad.net/~dendrobates/nova/lp663559/+merge/38892 | 22:10 |
_0x44 | I dunno... those look like expensive bytes. | 22:13 |
_0x44 | It's approved | 22:14 |
*** johnpur has quit IRC | 22:14 | |
_0x44 | And with that, I'm going to bed. Night openstack kids. | 22:14 |
vishy | fastest approval ever? | 22:14 |
vishy | dendrobates: the other bug/patch i submitted | 22:15 |
vishy | don't know if it is critical for release | 22:15 |
vishy | it interferes a bit with usability but it isn't exactly a bug | 22:15 |
vishy | s/bug/showstopper bug/ | 22:15 |
*** Ryan_Lane has quit IRC | 22:18 | |
dendrobates | _0x44: are you in Italy? | 22:19 |
dendrobates | vishy: what is the bug # | 22:20 |
dendrobates | bug #663568 | 22:21 |
uvirtbot | Launchpad bug 663568 in nova "Instance sizes aren't set to meaningful defaults" [Undecided,New] https://launchpad.net/bugs/663568 | 22:21 |
dendrobates | that one | 22:21 |
*** morfeas has joined #openstack | 22:21 | |
uvirtbot | New bug: #663565 in swift "No port in default_cluster_url causes error" [Critical,New] https://launchpad.net/bugs/663565 | 22:21 |
*** DubLo7 has left #openstack | 22:22 | |
vishy | https://bugs.launchpad.net/nova/+bug/663437 | 22:22 |
uvirtbot | Launchpad bug 663437 in nova "The root drive for an instance is always the size of the image instead of being resized" [Undecided,New] | 22:22 |
*** Ryan_Lane has joined #openstack | 22:22 | |
dendrobates | hmm | 22:23 |
dendrobates | it is annoying, not critical, but with a pretty simple fix | 22:24 |
vishy | just posted a fix to the 663568 as well | 22:24 |
dendrobates | vishy: is this something you have already fixed in production in Nebula? | 22:25 |
dendrobates | 663437 I mean? | 22:25 |
*** Ryan_Lane has quit IRC | 22:25 | |
vishy | yes both of those | 22:25 |
uvirtbot | New bug: #663568 in nova "Instance sizes aren't set to meaningful defaults" [Undecided,New] https://launchpad.net/bugs/663568 | 22:26 |
*** tonywolf has quit IRC | 22:27 | |
vishy | i'm going through and trying to backport any bug fixes, and i think that is all of them | 22:27 |
dendrobates | is anyone else around to review? eday? | 22:28 |
*** jakedahn has quit IRC | 22:28 | |
*** jakedahn has joined #openstack | 22:30 | |
*** ArdRigh has joined #openstack | 22:30 | |
gundlach | i'll take a look | 22:31 |
*** jakedahn has quit IRC | 22:31 | |
*** gzmask has quit IRC | 22:32 | |
*** jakedahn has joined #openstack | 22:32 | |
gundlach | vishy: did you mean to remove c1.medium? | 22:33 |
gundlach | (obviously you did as you moved the }, just checking that that was kosher) | 22:34 |
vishy | we did cuz it seems a bit redundant | 22:34 |
gundlach | k. | 22:34 |
*** jakedahn has quit IRC | 22:34 | |
gundlach | 663568 approved | 22:34 |
*** jakedahn has joined #openstack | 22:35 | |
gundlach | 663437 approved | 22:40 |
*** ianweller has joined #openstack | 22:40 | |
*** kevnfx has quit IRC | 22:42 | |
dendrobates | mtaylor: around? | 22:43 |
*** daleolds has quit IRC | 22:45 | |
vishy | nice | 22:46 |
*** jrussellnts_ has quit IRC | 22:46 | |
jaypipes | hmm | 22:51 |
*** ddumitriu has quit IRC | 22:53 | |
jaypipes | sirp, gundlach: fixes are in for https://code.launchpad.net/~jaypipes/nova/glance-image-service/+merge/38433 | 22:58 |
*** cloudmeat has quit IRC | 22:58 | |
gundlach | thanks. sirp approved 10 secs ago :) | 22:58 |
*** cloudmeat has joined #openstack | 22:58 | |
*** cloudmeat has quit IRC | 23:03 | |
*** kevnfx has joined #openstack | 23:04 | |
gundlach | jaypipes: approved. | 23:04 |
jaypipes | gundlach: ah, cheers. | 23:04 |
*** masumotok has joined #openstack | 23:06 | |
* dendrobates wanders off to have some dinner, will be back in a bit. | 23:09 | |
*** kevnfx has quit IRC | 23:12 | |
*** jonwood_ has joined #openstack | 23:16 | |
*** pharkmillups has quit IRC | 23:19 | |
*** jonwood has quit IRC | 23:19 | |
*** pvo has joined #openstack | 23:22 | |
*** ChanServ sets mode: +v pvo | 23:22 | |
*** pvo has quit IRC | 23:27 | |
*** rlucio has quit IRC | 23:34 | |
*** jakedahn has quit IRC | 23:35 | |
*** daleolds has joined #openstack | 23:36 | |
*** cloudmeat has joined #openstack | 23:37 | |
*** johnbergoon has left #openstack | 23:40 | |
gundlach | dendrobates, jaypipes, vishy: want to review https://code.launchpad.net/~gundlach/nova/lp663551/+merge/38900 which fixes bug #663551? | 23:42 |
uvirtbot | Launchpad bug 663551 in nova "OpenStack API does not give JSON/XML output for 500s" [High,Fix committed] https://launchpad.net/bugs/663551 | 23:42 |
jaypipes | gundlach: yup, will do. | 23:42 |
gundlach | tx | 23:42 |
gundlach | dendrobates: if you think this should go into Austin, please mark as Approved -- I'm not going to touch the Status. | 23:42 |
vishy | not sure how useful a review from me is going to be, I haven't really looked at the openstack api at all | 23:43 |
xtoddx | dendrobates: proposed fix for https://bugs.launchpad.net/nova/+bug/653344 | 23:44 |
gundlach | vishy: fine enough | 23:44 |
uvirtbot | Launchpad bug 653344 in nova "Image downloading should check project membership and publicity settings" [Critical,In progress] | 23:44 |
vishy | xtoddx: you forgot () | 23:45 |
vishy | ninja patch it quick | 23:45 |
vishy | :) | 23:46 |
*** pvo has joined #openstack | 23:46 | |
*** pvo has quit IRC | 23:46 | |
*** pvo has joined #openstack | 23:46 | |
*** ChanServ sets mode: +v pvo | 23:46 | |
*** pvo has quit IRC | 23:48 | |
xtoddx | a) i miss ruby, b) need more tests, i guess | 23:52 |
xtoddx | vishy: where? | 23:54 |
vishy | xtoddx: raise exception.NotAuthorized() | 23:54 |
*** sanjib has joined #openstack | 23:55 | |
xtoddx | none of the raise NotAuthorized have them, should they all? | 23:55 |
vishy | o rly | 23:55 |
vishy | in theory you're supposed to construct the exception when you raise it yes | 23:56 |
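The missing `()` vishy flags is a style point rather than a bug: in Python, `raise SomeError` and `raise SomeError()` both work, since raising a bare exception class instantiates it, but constructing the instance explicitly is the convention and lets you attach a message. A standalone illustration, using a stand-in class rather than nova's actual `exception.NotAuthorized`:

```python
class NotAuthorized(Exception):
    """Stand-in for nova's exception.NotAuthorized."""


# Raising the bare class works: Python instantiates it for you.
try:
    raise NotAuthorized
except NotAuthorized:
    bare_raise_ok = True

# Constructing it explicitly is the preferred style and can carry context.
try:
    raise NotAuthorized("image is private to another project")
except NotAuthorized as exc:
    message = str(exc)
```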
*** joearnold has quit IRC | 23:56 | |
xtoddx | vishy: fixed that file all over, committing now | 23:57 |
vishy | cool thx | 23:58 |
*** jonwood_ has quit IRC | 23:59 | |
*** metoikos has quit IRC | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!