Tuesday, 2010-10-19

00:00 <vishy> non-sdk version
00:00 <Ryan_Lane> cloudfusion was the old library
00:00 <vishy> i mean no one else has written a php library?
00:00 <Ryan_Lane> it is now the official version ;)
00:00 <vishy> i see
00:00 <Ryan_Lane> I'm going to try the old version and see if it works
00:00 <Ryan_Lane> though they dropped all the damn documentation
00:01 <vishy> it might be helpful to find the exact xml being sent in
00:01 <Ryan_Lane> I may be able to get that
00:01 <vishy> i'm guessing there might be an extra field or two that should be ignored in the signature calculation
00:03 <vishy> i'm heading out for the moment, but I would check the xml sent in by that vs euca tools for the same command and check for extra fields. We might need to add a couple of ignores in the sig calc.
00:04 <Ryan_Lane> any easy way to get the xml?
00:04 <vishy> sure, stick a logging statement in the api
00:05 <vishy> it printed out at one point
00:05 <Ryan_Lane> ah ok
00:07 <vishy> method in api/ec2/__init__.py
00:07 <vishy> line 100
00:07 <Ryan_Lane> awesome. thanks
00:07 <Ryan_Lane> it looks like the old sdk works
00:08 <vishy> the request params are all there
00:08 <Ryan_Lane> at least from the signature POV
00:08 <vishy> perhaps the ignore list is ignoring one that it shouldn't
00:08 <vishy> or another one needs to be added to the ignore list
00:08 <Ryan_Lane> if I can figure it out, I'll submit a bug with a fix
00:09 <vishy> if you could print out all the request commands, you'll probably see a mismatch in the api vs boto
00:09 <vishy> cool thanks
00:09 <vishy> good luck!
00:09 <Ryan_Lane> thanks :)
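vishy's suggestion above (log the parsed request parameters inside the EC2 API, then diff them between clients) can be sketched like this. The helper name and call site are assumptions for illustration; only the file vishy names, api/ec2/__init__.py, is from the log:

```python
import logging

def format_request_params(params):
    """Render request parameters one per line, sorted by key, so the
    same call made by two clients (e.g. boto vs. the PHP SDK) can be
    diffed to spot extra or missing fields that break the signature."""
    return '\n'.join('%s=%s' % (k, params[k]) for k in sorted(params))

# dropped near the request-parsing code in api/ec2/__init__.py,
# something like:
#   logging.debug('ec2 request:\n%s', format_request_params(request_params))
```

Sorting the keys matters: signature v2 canonicalizes parameters in sorted order, so a sorted dump lines up with what was actually signed.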
01:22 <Ryan_Lane> vishy: the aws sdk, in string_to_sign, is sending <hostname>, but the api is expecting <hostname>:<port>
01:39 <Ryan_Lane> vishy: I'm thinking this may actually be a bug in the aws framework. they send one thing, and sign another
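Ryan_Lane's finding can be illustrated with a minimal sketch of the signature v2 string-to-sign (a simplified reconstruction, not the SDK's actual code): the second line must be the Host header value, which includes the port when it is non-default, and nova's EC2 endpoint typically listens on a non-default port such as 8773.

```python
from urllib.parse import quote

def string_to_sign(method, host, port, path, params):
    """Simplified AWS signature v2 string-to-sign. The host line must
    include ':port' for non-default ports; a client that signs only
    the bare hostname (as the PHP SDK did here) fails verification."""
    host_header = host if port in (80, 443) else '%s:%d' % (host, port)
    query = '&'.join('%s=%s' % (quote(k, safe='-_.~'), quote(v, safe='-_.~'))
                     for k, v in sorted(params.items()))
    return '\n'.join([method, host_header.lower(), path, query])
```

For example, a DescribeInstances call against an endpoint on port 8773 puts `host:8773` on the second line, which is what the API server rebuilds on its side before checking the signature.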
*** abecc has quit IRC01:39
*** schisamo has quit IRC01:40
*** iammartian_ has joined #openstack01:43
*** gaveen has joined #openstack02:09
*** gaveen has joined #openstack02:09
*** daleolds has joined #openstack02:15
*** silassewell has joined #openstack02:22
*** sophiap has quit IRC02:24
02:28 <huz> hi, i ran into some problems when trying to configure multiple servers.
02:30 <huz> when i use swift-auth-add-user, the command line says: Update failed: 503 Service Unavailable
02:33 <huz> the syslog shows:
  Oct 19 10:32:12 swift01 auth-server ERROR Unhandled exception in ReST request:
  Traceback (most recent call last):
    File "/home/huz/swift/trunk/swift/auth/server.py", line 605, in handleREST
      response = handler(req)
    File "/home/huz/swift/trunk/swift/auth/server.py", line 467, in handle_add_user
      create_account_admin, create_reseller_admin)
    File "/home/huz/swift/trunk/swift/auth/server.py", line 330, in create_user
      account_hash = self.add_storage_account()
    File "/home/huz/swift/trunk/swift/auth/server.py", line 198, in add_storage_account
      (token, time()))
    File "/home/huz/swift/trunk/swift/common/db.py", line 81, in execute
      return self._timeout(lambda: sqlite3.Connection.execute(
    File "/home/huz/swift/trunk/swift/common/db.py", line 74, in _timeout
      return call()
    File "/home/huz/swift/trunk/swift/common/db.py", line 82, in <lambda>
      self, *args, **kwargs))
  OperationalError: attempt to write a readonly database
02:39 <creiht> huz: sounds like a permission problem?
02:40 <Ryan_Lane> vishy: sent a patch, in case others ask: http://developer.amazonwebservices.com/connect/thread.jspa?threadID=53117
02:45 <huz> creiht: the proxy service and the ring only need to run on one server, is that correct?
02:46 <creiht> I would check the ownership/permissions of the mounts on the storage nodes
02:52 <huz> ok, i'll check that. Another question is about step 10 in the SAIO
02:53 <huz> it's for a dev VM; in my situation (5 storage nodes), do i need to configure /etc/rsyncd.conf on one server or on all 5 servers?
02:54 <creiht> all 5 servers will need an rsyncd.conf
02:54 <creiht> but it would be better to reference the rsyncd.conf in swift/etc/rsyncd.conf.sample
02:54 <creiht> (from the source tree)
02:55 <creiht> the rsyncd.conf from the saio is setup specifically to handle the fact that you are running 4 nodes simultaneously on the same machine (which is not the norm)
02:57 <huz> ahh, my rsyncd.conf configurations seem ok, thanks for the confirmation.
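For anyone following along, the per-server rsyncd.conf that creiht points at (swift/etc/rsyncd.conf.sample) looks roughly like the fragment below. The uid/gid and the /srv/node path are assumptions about a standard single-drive-per-node layout; check the sample in your own source tree:

```
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid

[account]
max connections = 2
path = /srv/node
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node
read only = false
lock file = /var/lock/object.lock
```

The SAIO variant differs only in that it defines four copies of each section with different paths, because all four "nodes" share one machine.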
04:10 <DesiJat> give him the once over
04:25 <exelanz> if someone knows of a good resource to understand the load balancing capabilities of nova, please guide me to that.
09:00 <zykes-> alekibango: around ?
09:12 <alekibango> zykes-: yes
09:13 <zykes-> any documentation to follow or to use as a basis when doing a multinode install ?
09:14 <alekibango> this is best atm
09:14 <alekibango> plz help editing
09:17 <zykes-> when you say multinode, do you mean splitting up all the different components ?
09:28 <alekibango> multinode means installing on 2+ hardware servers
09:28 <alekibango> i would like to cover examples for 4 servers in one cluster -- and 30+ servers in different clusters
09:29 <alekibango> single server install of nova is much easier :)
09:36 <zykes-> alekibango: i can try to do a virtual install of a 4 node one
09:37 <alekibango> i will try too
09:37 <alekibango> now i am drawing pictures of nova arch :)
09:38 <zykes-> isn't that there already ?
09:38 <zykes-> the graph that came earlier.
09:41 <arthurc> Hi alekibango, can we install compute nodes on virtual machines ? are they not running kvm ?
09:42 <jonwood> arthurc: They'll work with UML as well, or a fake virtualisation driver.
09:42 <alekibango> yes, if you use QEMU or UML
09:42 <jonwood> And good morning :)
09:42 <alekibango> its noon here :)
09:42 <alekibango> but have a nice 24/7 :)
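The point above (compute nodes inside VMs have no hardware virtualization, so kvm is out) comes down to one nova-compute flag in this era's tree. A hedged illustration; treat the exact values as assumptions and check your version:

```
# in nova.conf / on the nova-compute command line
--libvirt_type=qemu    # plain emulation, works inside a VM
# --libvirt_type=uml   # user-mode-linux alternative
# --libvirt_type=kvm   # the default, needs hardware virt
```

The fake driver jonwood mentions bypasses libvirt entirely and is selected through the connection/driver flag instead.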
09:55 <zykes-> can one manage san disks with openstack ?
09:57 <jonwood> zykes-: *In* what sense?
10:07 <zykes-> jonwood: in the sense of managing disks.
10:07 <zykes-> or storage on nodes
10:08 <jonwood> As far as I can see at the moment the only supported storage method for disks is on your filesystem as disk images.
10:08 <jonwood> But there's no reason you can't mount SAN storage and tell it to put disk images there.
10:08 <alekibango> i would think sheepdog is interesting
10:09 <jonwood> alekibango: It is if you don't already have a SAN installed.
10:09 <jonwood> If you've spent tens of thousands of dollars on storage already, you'll want to use that ;)
10:09 <alekibango> i do not want to spend those
10:09 <alekibango> i would like to avoid the server tax
10:10 <alekibango> if you know what i mean
10:10 <alekibango> .. but still have it nice, fast, reliable
10:11 <zykes-> how to best use shared storage now then ?
10:12 <jonwood> zykes-: Mount it as part of your filesystem
10:12 <alekibango> zykes-: i would think glusterfs is fitting for the image store -- but might be slow for live (used) disks
10:13 <alekibango> zykes-: i think for instance disks it will be best to implement sheepdog support for KVM (qemu) hypervisors
10:14 <zykes-> doesn't anyone here use san for disks ?
10:14 <alekibango> jdarcy will be here in 4 hours, ask him :)
10:14 <alekibango> zykes-: they for sure do
10:14 <zykes-> how is that implemented then i wonder
10:15 <alekibango> zykes-: for now, disks for instances are created on local disks using lvm -- am i right ?
10:15 <jonwood> zykes-: As I said, you mount storage from your SAN as part of the filesystem.
10:16 <jonwood> alekibango: No. At the moment instance disks are created as files wherever you tell nova-compute to put them.
10:16 <alekibango> zykes-: there are big problems with storage these days... try googling 'silent data corruption' :)
10:17 <alekibango> jonwood: thanks, for now i used only fake storage
10:17 <alekibango> that will change soon
10:20 <zykes-> alekibango: yeah, that's the way i've been doing stuff lately
10:22 <zykes-> what's the use of Sheepdog if you use a san ?
10:23 <jonwood> zykes-: There isn't one really.
10:23 <zykes-> i guess sheepdog is more useful to distribute data across nodes ..
10:23 <alekibango> yes, san can be less effective in resource usage :)
10:23 <alekibango> (depends on arch)
10:24 <jonwood> alekibango: I doubt sheepdog is going to get close to the performance of a decent SAN.
10:24 <zykes-> it reminds me of Panasas
10:24 <zykes-> in some way or "pnfs"
10:24 <zykes-> yeah, hpc storage
10:24 <zykes-> or i mean, in the way of replication etc
10:24 <alekibango> pnfs sucks :)
10:25 <zykes-> heh, well it's easy to administer and gives high performance on clusters + storage cap.
10:26 <alekibango> zykes-: maybe :)
10:26 <zykes-> like add x tb of storage, add a shelf and do one command and you're done.
10:27 <alekibango> how does it perform with large files (disk images used by vguests)?
10:28 <zykes-> it's not for that kind of usage as far as I know
10:28 <zykes-> it's meant for HPC clusters
10:28 <alekibango> well, that means sheepdog might be much faster :)
10:29 <alekibango> ... i didn't benchmark it
10:29 <alekibango> just guessing
10:29 <alekibango> from the arch
10:29 <alekibango> one sheepdog cluster for a cluster of nova servers -> happy
10:48 <zykes-> alekibango: how would you reckon one should setup a 4+ node cluster ?
10:49 <jonwood> zykes-: Why specifically 4 nodes?
10:49 <alekibango> zykes-: he would add servers to his cluster :)
10:49 <alekibango> having around 20 or more he should really think about redesign
10:49 <jonwood> alekibango: Yes, but anything over a single node is the same process.
10:49 <alekibango> jonwood: well, no its not
10:49 <jonwood> It's just a matter of which machines run which services.
10:50 <alekibango> it will have different architectures
10:50 <alekibango> and different problems
10:50 <alekibango> different bottlenecks
10:50 <jonwood> Yes it is. You might end up having a MySQL cluster instead of a single database server, and failover, but the architecture is the same.
10:50 <alekibango> i don't think so.
10:50 <alekibango> first, he will have more clusters
10:50 <alekibango> not only one
10:51 <alekibango> that makes a big difference itself
10:51 <alekibango> second - he will have different needs
10:51 <jonwood> Why would you have more clusters?
10:51 <alekibango> because of the need to make it more reliable and secure -- by having more providers in more countries
10:51 <alekibango> or whatever
10:52 <alekibango> jonwood: limits on network bandwidth
10:52 <alekibango> you will need to split it at some scale
10:52 <alekibango> to make it effective
10:52 <jonwood> That entirely depends on the situation it's being used in.
10:52 <alekibango> you will bring in more interfaces, different technologies
10:53 <alekibango> jonwood: yes, but for 4 servers, we can keep requirements low
10:53 <alekibango> and they can be installed in a way that gives some reliability
10:53 <zykes-> alekibango: can't one use a distributed database instead of mysql ?
10:54 <alekibango> zykes-: i would use a mysql or postgresql cluster from the start
10:54 <jonwood> zykes-: The main database needs to be SQL based at the moment.
10:54 <alekibango> mysql and postgresql can be clustered
10:55 <zykes-> something like drizzle could be used too ?
10:55 <alekibango> zykes-: i am not sure about the implementation
10:55 <jonwood> zykes-: Probably.
10:55 <alekibango> i am more inclined to believe in postgresql :)
10:56 <jonwood> zykes-: I'm not sure that anything supports splitting reads and writes yet though, so you might need to put mysql proxy in front of it.
10:56 <alekibango> the only requirement is to have sqlalchemy working with the db
10:56 <jonwood> Although it might be possible to just point your cloud controller at the master, and have everything else running off the slaves.
10:56 <jonwood> I'm not sure which components do writes to the database.
10:57 <zykes-> jonwood: mysql proxy in front of what ?
10:57 <jonwood> Your database cluster.
10:58 <alekibango> mysql can be clustered, having servers on each node
10:58 <alekibango> -- for 4 servers
10:58 <zykes-> alekibango: ndb cluster
10:58 <jonwood> Unless Drizzle is a proper distributed database now, I'm not sure.
10:58 <zykes-> you mean or ?
10:58 <alekibango> for 30+ you will need another arch
10:58 <alekibango> zykes-: yes
10:58 <alekibango> that's what i mean by 4 servers and 30+ servers, there are different needs, really :)
10:59 <alekibango> most small companies will use 4-6 servers
10:59 <alekibango> for private clouds
10:59 <alekibango> big ones will use a 30+ schema
11:00 <zykes-> per cluster then i guess ?
11:00 <alekibango> well, there is a need for a central DB, or am i mistaken?
11:01 <alekibango> at least for users if you will use it
11:01 <alekibango> and having 500+ servers you will have other needs
11:01 <alekibango> but you should have enough resources to figure that out
11:02 <jonwood> I still don't get where you're getting that information from. As far as I'm aware no one but NASA *has* a 500+ server nova cloud right now.
11:02 <jonwood> So any declarations that you'll need one setup or another are just guesses.
11:02 <alekibango> :) yes they are :)
11:03 <alekibango> but no one opposes -> that means maybe i am right
11:03 <jonwood> In that case, start simple, see where problems occur, fix them, and repeat.
11:03 <alekibango> one of my fav. ways to find out the truth is to claim something and wait for reactions :)
11:03 <alekibango> jonwood: yes, that's why we start with 4 servers
11:03 <jonwood> Starting out with a multi-server database cluster because it *might* be where you see problems is just throwing away money.
11:03 <zykes-> how should one cluster pgsql then ?
11:03 <alekibango> zykes-: there are more ways now. pg 9.x is out
11:04 <alekibango> and i feel i am not the authority to decide this :) as i didn't test it irl
11:05 <zykes-> so for a 10+ node setup you should test clustering ?
11:05 <zykes-> of sql resources
11:05 <jonwood> zykes-: Personally I doubt you'll need it, except for failover if your primary database dies.
11:06 <alekibango> jonwood: i don't say there are big differences... but there are some :)
11:06 <alekibango> zykes-: for 10 i would use one cluster -- if the network will be ok
11:07 <zykes-> hacluster then ?
11:08 <jonwood> That's probably what I'd do. master/master replication between a pair of MySQL servers with failover.
11:08 <jonwood> And then start adding slaves if you see performance problems.
11:09 <alekibango> pair, failover - that sounds dangerous to me :)
11:10 <jonwood> Yes. Almost as dangerous as running without any way of recovering from failure of a core part of your cluster.
11:10 <alekibango> i would use all 4 servers in that 4 server setup
11:10 <alekibango> if possible, in a synchronous way -- but having fast reads
11:11 <zykes-> alekibango: for a sql cluster ?
11:12 <alekibango> but i am not sure how, as i never did that. i have always used single server installs until now :)
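As alekibango notes, nova's only hard requirement on the database is that SQLAlchemy can reach it, so moving from a single server to a replicated pair behind a VIP or proxy is mostly a connection-string change. A hedged nova.conf illustration; the flag is this era's `--sql_connection`, but the hostnames and credentials are made up:

```
# single node: the sqlite default
--sql_connection=sqlite:///$state_path/nova.sqlite

# shared MySQL server, or a VIP/mysql-proxy fronting a
# master/master pair (the application cannot tell the difference)
--sql_connection=mysql://nova:secret@db-vip/nova

# a postgresql cluster works the same way
--sql_connection=postgresql://nova:secret@pg-vip/nova
```

Every nova service that touches the database reads this one flag, which is why the cluster-vs-single-server decision stays invisible to the rest of the deployment.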
12:14 <dendrobates> jaypipes: I agree
12:15 <jaypipes> dendrobates: :)
12:18 * jonwood goes back to invoicing
12:24 <ratasxy> hello, what is the date for the final release of openstack
12:24 <jaypipes> dendrobates: ^^
12:39 <alekibango> ratasxy: 21st
12:39 <alekibango> but that will be the initial release, not the final one
12:40 <alekibango> the final release will be when openstack dies
12:41 <alekibango> dendrobates: i hope i am right :)
12:44 <zykes-> alekibango: what kind of cluster ?
12:45 <zykes-> for pgsql
12:45 <zykes-> cause i guess if you want performance you need to set some nodes read only ?
13:19 <jaypipes> grrr, I think I'm coming down with a cold... :(
13:19 <jaypipes> haven't had one in over a year...
13:28 <alekibango> zykes-: imho nova will not load the DB that much.
13:29 <alekibango> i would rather be concerned about consistency and availability
13:31 <crazed> anyone have good info on using multiple compute nodes?
13:31 <crazed> i'm on ubuntu 10.10 so things are in the repos
13:31 <alekibango> crazed: its the topic of the day
13:31 <alekibango> crazed: http://etherpad.openstack.org/NovaMultinodeInstall
13:32 <alekibango> yesterday we started documenting it
13:32 <crazed> how perfect
13:32 <crazed> i've got some test servers that we were going to use for cloudstack
13:33 <crazed> but the networking for it is pretty complicated
13:33 <alekibango> crazed: yes, help us write those docs
13:33 <alekibango> and put your thoughts/questions there
13:34 <crazed> if i follow the steps for doing it on one node first, is that going to cause problems when trying to add the others
13:35 <alekibango> crazed: yes, you will need a database
13:35 <alekibango> and network setup
13:35 <alekibango> etc, see the doc
13:35 <crazed> slowly reading through it
13:37 <alekibango> there are 2 docs right now and some questions around
13:42 <crazed> ah yeah the bottom part seems closer to what i'll be doing
13:42 <crazed> cuz they're using the PPA
13:43 <alekibango> crazed: first release is in 2 days
13:43 <alekibango> documentation is a bit behind, please help :)
13:43 <crazed> i will help as soon as i get into the installation bits
13:44 <crazed> openstack does vlan tagging?
13:44 <openstack> crazed: Error: "does" is not a valid command.
13:47 <alekibango> openstack: rename yourself
13:47 <openstack> alekibango: Error: You don't have the owner capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
13:48 <alekibango> crazed: it can use some kind of vlans, not sure about details, see source :)
13:48 <pvo> alekibango: openstack bot getting snarky?
13:50 <crazed> ideally should be able to create separate "solutions" where each solution would be in a private vlan separated from the others
13:51 <crazed> i've got a layer 3 switch that supports the vlan tagging, but i'll start with something basic for now
13:53 <alekibango> crazed: it can do it without the tagging switch
13:53 <alekibango> using virtual lans
13:54 <crazed> but then one of your nodes would have to be the gateway? right?
13:54 <alekibango> crazed: http://nova.openstack.org/architecture.html
13:55 <crazed> ATAoE hm
13:55 <crazed> seems pretty similar to eucalyptus's layout
13:56 <alekibango> well, eucalyptus might have problems with scaling :)
13:56 <alekibango> but its similar
13:56 <alekibango> cloud controllers (public api), cluster controllers, ... etc
13:56 <dendrobates> pvo: who owns the openstack bot?
13:57 <alekibango> you might know by the ip address?
13:57 <crazed> openstack: owner
13:57 <openstack> crazed: Error: "owner" is not a valid command.
13:57 <dendrobates> crazed: nova was originally written as a drop in replacement for eucalyptus
13:57 <crazed> this makes sense
13:58 <crazed> i've had limited experience with eucalyptus
13:58 <dendrobates> crazed: we have made most eucalyptus specific design flaws optional
13:58 <crazed> but the scalability issues caused us to drop it
14:01 <crazed> does the cloud controller need root access to the mysql server?
14:02 <crazed> or does it only use one database
14:02 <dendrobates> the sql server can be set up however you want. So it is up to your DBAs
14:02 <crazed> i'm just looking at these docs on etherpad and they are using root.. which i'd never do
14:03 <alekibango> crazed: you know, people are lazy to configure it :)
14:03 <alekibango> and the novascript from vishy should be run as the root user :)
14:05 <crazed> can the network type be changed after initial configuration?
14:05 <crazed> --network_manager=nova.network.manager.FlatManager, i'll probably want to move to vlans in the future
14:05 <dendrobates> vishy's script is for dev testing and not setting up a real environment
14:06 <dendrobates> crazed: not sure. probably not without headaches if you have running vm's
14:06 <crazed> ok, i'll probably just destroy any running VMs before changing something like that
14:06 <dendrobates> I think it wouldn't be a big deal if you don't have any vm's
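For reference, the network managers being compared here are selected with the flag crazed quotes; the class paths below match this era's tree, but treat the exact spellings as something to verify against your checkout:

```
# flat networking (what crazed is running)
--network_manager=nova.network.manager.FlatManager

# VLAN-per-project networking (the default at this point)
--network_manager=nova.network.manager.VlanManager
```

As the exchange above suggests, switching managers under running instances is risky because existing VMs keep addresses and bridges allocated under the old model, so tear the instances down first.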
14:12 <soren> creiht_: Kinda.
14:12 <soren> creiht_: Err... Never mind.
14:12 <soren> creiht_: I was scrolled *way* up, and the last thing I could see was you asking if I was around :)
14:14 <pvo> dendrobates: we do. ant specifically.
14:15 <dendrobates> pvo: can we change the name from openstack to something else?
14:15 <pvo> yea, was going to talk to ant about it.
14:16 <creiht_> or at least be a little smarter about what it should respond to
14:17 <crazed> AttributeError: 'FlatManager' object has no attribute 'allocate_network'
14:17 <crazed> hm that's what i get when trying to create a new project
14:20 <alekibango> openstack bot should respond only on lines starting openstack:  (with :)
14:20 <openstack> alekibango: Error: "bot" is not a valid command.
14:22 <pvo> alekibango: don't tease the bot. :p
14:23 <crazed> anyone run into that before?
14:23 <crazed> trying to create a new project
14:32 <dendrobates> crazed: if it still exists in trunk, please submit a bug
14:34 <crazed> removing that line from the various configs fixes it
14:34 <zykes-> is the PPA the latest version ?
14:35 <dendrobates> zykes-: it is a daily build, so it is much newer.
14:37 <dendrobates> crazed: but why is it trying to allocate network with the flat model. We still need to see if it exists in trunk, so we can fix it before release.
14:39 <crazed> ah looks like the default is VlanManager
14:40 <crazed> /usr/bin/python /usr/bin/nova-manage project create network
14:40 <crazed> in those etherpad docs, why does it have that
14:41 <crazed> doesn't seem like a valid command
14:44 <dendrobates> it should be, I think we are not testing the flat network model well enough.
14:44 <dendrobates> any volunteers?
14:45 <zykes-> for what ?
14:45 <dendrobates> testing flat networking
14:45 <crazed> blah i can't get nova-network or nova-scheduler to start
14:46 <spy> dendrobates: bot should be a little more quiet now
14:46 <crazed> how close is the PPA to ubuntu's archive
14:46 <dendrobates> crazed: very different, much newer.
14:47 <crazed> should i try that instead
14:48 <dendrobates> crazed: people have not been having many problems with the ubuntu packages. which instructions are you following?
14:48 <crazed> under the pinkish color
14:48 <crazed> "Setup the cloud controller first"
14:49 <dendrobates> crazed: are you installing on multiple systems?
14:50 <dendrobates> openstack are you quieter now?
14:50 <crazed> dendrobates: attempting to, but starting with just the cloud controller
14:51 <dendrobates> openstack: are you quieter now?
14:51 <dendrobates> openstack help
14:51 <dendrobates> it's pretty quiet
14:52 <crazed> nova.exception.NotFound: project network data has not been set
14:54 <dendrobates> this feels like configuration. I'll try it myself when I get a chance
14:54 <Ryan_Lane> I made an opends/opendj/sun directory server version of the nova ldap schema. anywhere in particular I should be sending things like this?
14:55 <crazed> those instructions also seem to leave out rabbitmq and redis
14:55 <crazed> i'm checking out http://wiki.openstack.org/NovaInstall now
14:55 <creiht> Ryan_Lane: nice!
14:55 <crazed> they didn't change any of the defaults
14:56 * creiht is a fan of the sun directory server
14:56 <dendrobates> redis is only needed for fake ldap, if you use a real ldap server you don't need it
14:56 <Ryan_Lane> creiht: I also plan on sending in some patches for the ldap driver
14:56 <crazed> one of those two made nova-network start properly
14:57 <Ryan_Lane> creiht: I like it too, but I'm really starting to favor opendj (the fork of opends)
14:57 <creiht> Ryan_Lane: doesn't that mean that it would work with the fedora directory server as well? Or have those projects diverged now?
14:57 <creiht> (it has been a while for me)
14:57 <Ryan_Lane> I hate openldap with a passion
14:57 <crazed> it was rabbitmq
14:57 <crazed> i love openldap
14:57 <crazed> at least 2.4 and up
14:57 <Ryan_Lane> fedora directory server is based on iplanet
14:57 <crazed> openldap + apache directory studio
14:57 <Ryan_Lane> and so is sun directory server, somewhat
14:57 <creiht> I thought they had a common ancestor or something?
14:58 <creiht> ahh ok
14:58 <dendrobates> Ryan_Lane: commit it to the archive
14:58 <Ryan_Lane> the schemas should be compatible
14:58 <Ryan_Lane> I don't have commit access
14:58 <creiht> Ryan_Lane: make a merge proposal
14:58 * Ryan_Lane doesn't know bzr or launchpad :)
14:58 <dendrobates> Ryan_Lane: have you signed the CLA?
14:58 <creiht> either way, they both support multi-master replication, which is nice
14:58 <dendrobates> well, wait until after the release and I will help you.
14:59 <Ryan_Lane> though I'm likely to send in a number of patches, so it's probably a good idea
14:59 <creiht> the tools for sun directory server are very nice
14:59 <Ryan_Lane> opendj is like a java rewrite of sun directory server. it's pretty awesome
14:59 <creiht> all that said, I don't miss having to mess with LDAP :)
15:00 <Ryan_Lane> yeah, after release sounds good though
15:02 <Ryan_Lane> is there any way to have instances start a VNC session when they are launched?
15:03 <Ryan_Lane> err, vnc server
zykes-http://socializedsoftware.com/wp-content/uploads/2010/07/WebMainScreen.jpg < what gui is that ?15:09
Ryan_Lanethe ruby one15:10
Ryan_Lanelemme see if I can find it anywhere15:10
crazedValueError: IP('') has invalid prefix length (25)15:10
crazedhm aggravating15:10
zykes-crazed: isn't that invalid ?15:10
crazed25 is definitely a valid prefix length15:10
crazedit was because of the network-size/range options15:12
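(For reference: the `ValueError` above comes from the address part being empty, not from the `/25` prefix. A quick sketch of the same failure mode using the stdlib `ipaddress` module — a stand-in for the IPy library nova used at the time, not nova's actual code:)

```python
import ipaddress

# /25 is a perfectly valid prefix length; a /25 network holds 128 addresses.
net = ipaddress.ip_network("10.0.0.0/25")
print(net.num_addresses)  # -> 128

# The crash comes from an empty address string, as with IP('') above:
try:
    ipaddress.ip_network("/25")
except ValueError as err:
    print("invalid:", err)
```

So the fix is to track down where the network/range options produce an empty address, as crazed did, rather than to change the prefix.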
Ryan_Lanezykes-: https://launchpad.net/openstack-web-control-panel15:12
zykes-same dude has made one for ruby and another for objectivec15:13
*** mdomsch has joined #openstack15:14
vishy244/25 can't be right15:16
*** deshantm has joined #openstack15:21
*** pvo has quit IRC15:23
crazedthis is cool15:23
crazedi've got a working cloud controller :)15:23
*** kevnfx has joined #openstack15:23
*** pvo has joined #openstack15:25
*** pvo has joined #openstack15:25
*** ChanServ sets mode: +v pvo15:25
vishycrazed: nice15:25
crazedcouldn't get the flat networking to work though15:27
*** coolhandluke has joined #openstack15:27
*** DesiJat has quit IRC15:32
*** pvo has quit IRC15:40
*** calavera has quit IRC15:43
crazedlunch then time to get a node working15:43
crazedby the way, are there any web interfaces to openstack yet?15:47
Ryan_Lanecrazed: https://launchpad.net/openstack-web-control-panel15:48
Ryan_Laneit's not really done15:48
Ryan_Laneby any means :)15:48
crazedthat's fine with me, this is entirely for testing purposes15:49
crazedso beta software is good enough15:49
Ryan_LaneI think it's more like early alpha ;)15:49
crazedhaha even better15:49
Ryan_Laneyeah,  what's the fun in using something finished :D15:49
*** sirp has quit IRC15:50
Ryan_Lanesounds like you are a glutton for punishment, like myself15:50
*** sirp has joined #openstack15:56
*** kashyapc has quit IRC15:57
*** pvo has joined #openstack16:01
*** pvo has joined #openstack16:01
*** ChanServ sets mode: +v pvo16:01
*** pvo has quit IRC16:01
crazedhow do you handle shared storage for vms and possibly moving instances around between nodes16:03
crazedi know libvirt is capable of that, just wondering how you guys have implemented it16:03
crazedi've got some nice NFS storage attached to each of my nodes already16:03
crazedseems like nova-volume is in charge of this stuff16:04
*** fitzdsl has joined #openstack16:07
creihtmtaylor: so should we set up a lp:swift/1.1 that points at the 1.1 series?16:07
mtaylorcreiht: yeah. I think so.16:07
creihtand is that kinda like a tag with bzr/lp?16:07
mtaylorcreiht: not really - we'll want to push a completely different branch - then on lp we'll make a 'release series'16:08
mtayloror a 'series' rather16:08
mtaylorreleases always come in the context of a series in launchpad world16:08
creihtI think we have our code ready... we are testing the package building right now16:09
creihtmtaylor: so we could cut a tarball now16:09
creihthow does one do that?16:09
mtaylorpython setup.py sdist16:09
mtaylorand then you'll want to upload it to launchpad - we should make the 1.1 series first... one sec16:10
mtaylorah - there already is a series. win16:12
mtaylorso this will be 1.1.0 ?16:12
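(A self-contained sketch of the `python setup.py sdist` step mtaylor mentions. The `demo` package name, version, and `/tmp` path are made up for illustration; swift's real setup.py is more involved:)

```shell
# Minimal demonstration of what sdist produces: a versioned source tarball
# whose name comes from the name/version fields in setup.py.
mkdir -p /tmp/sdist-demo && cd /tmp/sdist-demo
cat > setup.py <<'EOF'
from setuptools import setup
setup(name="demo", version="1.1.0")
EOF
python3 setup.py sdist    # writes dist/demo-1.1.0.tar.gz
ls dist/
```

This is also why the RC-versioning question below matters: whatever is in setup.py's `version` field at build time is baked into the tarball name.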
creihtis there anything we should do to mark it as an RC?16:13
crazeddamn, first attempt to launch an instance failed16:13
mtaylornot really - well... what do you mean by RC? - that this is an RC for 1.1.0? or that it's an RC for 1.1 ?16:13
mtaylorbleh. that's not good16:14
*** pvo has joined #openstack16:14
*** pvo has joined #openstack16:14
*** ChanServ sets mode: +v pvo16:14
*** kashyapc has joined #openstack16:14
creihtmtaylor: not sure16:14
creihtI would say RC for 1.1.016:15
creihtor is that not the right way to think about it?16:15
mtaylorI don't know that there is a _right_ way16:15
mtaylorbut - I certainly would not release a thing called swift-1.1.0.tar.gz and then release another thing named the same thing in a few days16:16
creihtso this is what we will be basing the release off of, but we may have to contribute bug fixes before the release16:16
creihtyeah that was my thought16:16
creihtlunch time... bbl16:16
mtayloryou could edit setup.py and change the version to 1.1.0rc or 1.1.0pre  or 1.0.99 or something16:16
*** pvo has quit IRC16:19
*** netbuzzme has joined #openstack16:23
crazedfreaking network object hm16:27
crazeddamn it, these log messages aren't that helpful hm16:31
crazed2010-10-19 12:31:06-0400 [-] AttributeError: 'NoneType' object has no attribute 'access'16:32
dendrobatesjaypipes: are you up and around?16:32
jaypipesdendrobates: I am.16:33
*** roamin9 has joined #openstack16:34
*** dysinger has joined #openstack16:36
roamin9hi ;)16:37
crazedi got a step further, but hm16:37
edaydendrobates: I've proposed https://blueprints.edge.launchpad.net/nova/+spec/distributed-data-store for the sprint, looks like it needs some sort of approval to be listed in the sprint for discussion.16:40
*** daleolds has joined #openstack16:41
dendrobatesit does, I haven't even started that process yet16:42
mtaylorjaypipes: so I was just talking to Spamaps about that burndown report I showed you a few weeks ago16:42
crazedInstanceLimitExceeded: Instance quota exceeded. You can only run 0 more instances of this type., crap where do i set this limit?16:43
edaydendrobates: ahh, ok :)16:43
jaypipesmtaylor: ya?16:44
roamin9when i try to use swift-auth-add-user, it returns that error, can anyone help me?16:44
mtaylorjaypipes: he's going to try to get the Work Item lists integrated directly into blueprints, which I think would be cool ... but I think he had a better definition of the when-to-use-blueprints vs. when-to-use-bugs (although I think the def I was tossing around still applies)16:45
mtaylorjaypipes: his def is "blueprints are planned, bugs are unplanned"16:45
mtaylorjaypipes: and then they break the blueprints down into work items by using formatted text in the whiteboard area and the burndown report16:46
roamin9this is the first time i've used swift, could you give me some tips?16:46
jaypipesmtaylor: that would be great :) I'm already using the Work Items style in my whiteboards...16:46
mtaylorjaypipes: excellent16:47
*** silassewell has quit IRC16:47
roamin9I did not care, just need some tips16:48
dendrobatesmtaylor jaypipes: pitti has already done something like that:  http://people.canonical.com/~pitti/workitems/maverick/canonical-server-ubuntu-10.10.html16:49
dendrobateswe can talk to him at UDS about using his code16:49
jaypipesdendrobates: yes, have seen that16:50
dendrobatesis the thing you are talking about better/different?16:50
jaypipescreiht: see roamin9 ^^16:50
mtaylordendrobates: yeah, that's actually what we're talking about16:50
mtaylordendrobates: we're just talking about integrating/using it :)16:50
dendrobatescool, I agree.  I was already planning on it16:51
mtaylordendrobates: excellent.16:51
*** coolhandluke has quit IRC16:51
roamin9i just need some tips. thank u16:51
roamin9I'm new, just give me some tips. ok?16:53
*** matclayton has left #openstack16:55
jaypipesgholt, redbo: any ideas for roamin9?16:57
alekibangoroamin9: most people here are new.. developers who are not have hard time fixing bugs16:58
alekibango(as release is on 21st)16:58
mtaylordendrobates: any ideas on what we should do with swift 1.1 RC?16:59
roamin9i know, thank you16:59
alekibangoroamin9: some service is not running imho16:59
mtaylordendrobates: my personal impulse is to just release 1.1.0 as is and indicate in release notes that it's RC ... and then releaes 1.1.1 whenever16:59
alekibangoor you are not configured well to connect to the right one16:59
jaypipeseday: added some notes to http://etherpad.openstack.org/DistributedDataStore.16:59
mtaylordendrobates: but I'm not sure if that's the "right" thing in the context of this project16:59
*** joearnold has joined #openstack17:03
*** rlucio has joined #openstack17:03
edayjaypipes: in response to your comments, I see the API server touching the DB, expecting to be able to see all instances inside of it. The workers also pull/update from the same central data store, no? Did I misunderstand your comments or am I missing something? :)17:06
crazeddo you *need* ldap or redis to properly run instances?17:07
crazedi was able to successfully get kernel, ramdisk, and machine instances up17:07
*** ibarrera has quit IRC17:07
crazedroot@n1:~# euca-run-instances ami-et09ff0g --kernel ami-9v62mwx4 --ramdisk ami-l4n8vdny -k mykey17:07
crazedNotAuthorized: None17:07
crazedand i don't know where that's coming from, no mention of things in the log messages17:08
jonwoodeday, jaypipes: Have you seen mcollective?17:09
jonwoodIt could be a good fit for distributing state to each node.17:09
alekibangojonwood: i plan using ti later :)17:09
alekibangojonwood: no, not for distributing state17:10
*** daleolds has quit IRC17:10
alekibangobtw nova uses AMQP, as mcollective does17:10
edayjonwood: I've read about it, but not too deeply17:10
gholtroamin9: ECONNREFUSED indicates the auth server probably isn't running. When you do a 'ps axf | grep auth-server' does it find anything?17:11
alekibangojonwood: i would love to have python alternative :))17:11
dendrobatescrazed: did the instance get created anyway?17:11
jonwoodeday: Its main feature is the ability to aggregate data from multiple nodes.17:11
jonwoodalekibango: I'm sure you're quite welcome to write one.17:11
alekibangojonwood: it should be simple, if you have an inexpensive array of redundant engineers17:12
alekibangojonwood: i would hate having to learn ruby only to use it with mcollective :)17:12
jonwoodOk. Don't use mcollective then.17:13
alekibangoi might be forced to. hundreds of machines are hard to manage ;)17:14
jaypipesjonwood: no, haven't... I'll check it out.17:15
crazeddendrobates: no17:15
crazednothing ever goes into the compute logs17:15
roamin9gholt: hi,  when i run 'ps axf | grep auth-server' , there is no pid17:16
alekibangojaypipes:  http://marionette-collective.org/17:16
gholtroamin9: Okay, when you run 'swift-auth-server /etc/swift/auth-server.conf' is there any output?17:17
alekibangojaypipes: btw i also added my comment  to the article containing allegations of hijack...do you agree with me? :)17:19
roamin9gholt: yes, there is some error17:20
*** maplebed has joined #openstack17:21
rluciocrazed: actually i just noticed the same thing yesterday, the compute and volume logs don't seem to have any content even with --verbose17:21
roamin9gholt: [pipeline:main] section in /etc/swift/auth-server.conf is missing a 'pipeline' setting17:22
crazedthey did have logs when i was getting errors regarding "network" objects17:22
crazednow it's like nothing is happening17:23
vishycrazed: nova-manage project quota17:23
crazedinstances is set to 1017:23
vishyyeah you need to up instances and cores most likely17:24
crazedcores is 2017:24
vishynova-manage project quota instances 100017:24
vishyer project quota <project> instances 100017:24
vishythat is17:24
vishyand same for cores17:24
gholtroamin9: Cool, fix that conf file up and that should be fixed. I assume you're doing SAIO? If so, a sample conf is in the instructions at http://swift.openstack.org/development_saio.html17:25
crazedi upped them both to 1000, floating ips and volumes are still at 1017:25
crazedstill getting NotAuthorized: None17:25
roamin9:gholt ^^  it's my error, i misspelled pipeline. thank u very much17:27
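(For anyone hitting the same error: the missing section looks roughly like this. This is reconstructed from memory of the SAIO docs of that era — verify against the instructions gholt linked:)

```ini
[DEFAULT]
user = swift

[pipeline:main]
pipeline = auth-server

[app:auth-server]
use = egg:swift#auth
default_cluster_url = http://127.0.0.1:8080/v1
super_admin_key = devauth
```

The "[pipeline:main] ... missing a 'pipeline' setting" message means exactly that middle section (or a typo inside it, as in roamin9's case).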
vishycrazed: sorry that is not for not authorized, did you create your user as an admin?17:27
vishythat was a response to an eariler question you posted17:27
crazedyeah.. i tried to fix that and now i'm at this17:27
crazedbut yes my user is admin17:27
crazed| 2010-10-19 16:51:05 | NULL       | NULL       |       0 | crazed | NULL | admin      | c3d12183-ad5e-41fd-b369-a6b77c5510f7 |        0 |17:28
vishynope he isn't17:28
crazedlast column is is_admin17:28
vishywoah last column is secret i think17:28
crazedthere we go17:28
vishythat doesn't look right17:28
crazed| created_at          | updated_at | deleted_at | deleted | id     | name | access_key | secret_key                           | is_admin |17:28
crazed| 2010-10-19 16:51:05 | NULL       | NULL       |       0 | crazed | NULL | admin      | c3d12183-ad5e-41fd-b369-a6b77c5510f7 |        1 |17:29
vishyyour access key is admin17:29
crazedwe have lift off17:29
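(The confusion above — miscounting columns in wide `SELECT *` output — is easy to avoid by naming columns. A minimal sqlite sketch; the schema here is a simplification for illustration, not nova's real users table:)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id TEXT, name TEXT, access_key TEXT,"
            " secret_key TEXT, is_admin INTEGER)")
con.execute("INSERT INTO users VALUES ('crazed', NULL, 'admin',"
            " 'c3d12183-ad5e-41fd-b369-a6b77c5510f7', 1)")

# Selecting named columns makes it obvious which value is which.
row = con.execute("SELECT id, access_key, is_admin FROM users").fetchone()
print(row)  # -> ('crazed', 'admin', 1)
```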
crazedno idea if the networking is right though17:29
crazedit's running but yeah the ip is definitely wrong17:31
crazedit's using an ip that is assigned to one of my compute nodes17:31
*** burris has quit IRC17:31
vishyare you using flat networking?17:31
vishyoh really?17:32
crazedyeah, flat networking wouldn't work for me17:32
vishywhen you created networks with nova-manage, you must have used a range that was in use?17:32
vishythe networks are for instances, not for infrastructure17:32
crazedi didn't use nova-manage to create networks17:33
crazedthis is the ubuntu archive version and i don't see nova-manage networks as an option17:33
crazedbut hold on i have a conf call.. gr17:33
vishycrazed: ah old version creates them from flags, so you need to set private_range flag (or possibly fixed_range, i'm not sure how long ago the flag was renamed)17:34
Ryan_Laneyeah. the archive version, at least for lucid, doesn't have that option in nova-manage17:34
Ryan_Laneyou also can't terminate instances in the archive version :)17:34
rluciovishy: or flat_networking_ips17:36
vishyrlucio: yeah, although he isn't using flat17:36
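(A rough sketch of the flags involved — gflags-style nova.conf of that era. Exact flag names varied between versions, as vishy notes (private_range vs fixed_range), so treat these as illustrative rather than copy-paste:)

```
# instance (fixed) IP range -- must NOT overlap infrastructure addresses,
# which is what bit crazed above
--fixed_range=10.0.0.0/12
# addresses per allocated network
--network_size=256
```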
vishyRyan_Lane: we need a new build :)17:36
Ryan_Lanewould be nice :)17:37
vishysoren: are you here? i have a bug in our version and I want to verify it with you before i post it and maybe discuss the best fix.17:37
Ryan_Laneeasier to test multinode with the packages17:38
alekibangovishy: nova-manage is not documented at all, who would be right one to write some manual?17:38
vishywe're currently doing our multinode via custom packages and puppet17:38
Ryan_Lanethat's what I'm hoping to do17:39
Ryan_Laneminus the custom packages17:39
vishyi'd like to use chef though, but we can't use the opscode platform17:39
alekibangovishy: could you please go public with your multinode puppet and packages? :)17:39
vishyalekibango: at the very least we'll make some blog posts17:39
alekibangovishy: great17:39
alekibangoand nova-manage manual?17:40
vishyalthough virtually all of the commands have docstrings17:40
vishyall the ones i wrote do anyway17:40
vishyand the docstrings display when you enter command without options17:40
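(A minimal sketch of the nova-manage pattern vishy describes — the class and method names here are illustrative, not nova's actual code:)

```python
# The command's docstring doubles as its usage text when arguments are
# missing, so documenting the command is the same act as writing it.
class ProjectCommands:
    def quota(self, project, key=None, value=None):
        """project quota <project> [key] [value]"""
        raise NotImplementedError

def print_usage(method):
    # nova-manage printed the docstring when a command lacked arguments
    print(method.__doc__)

print_usage(ProjectCommands.quota)  # -> project quota <project> [key] [value]
```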
*** irahgel has left #openstack17:41
*** Grizzletooth has joined #openstack17:42
*** Ryan_Lane is now known as Ryan_Lane|food17:45
rlucioalekibango: i thought someone yesterday was saying they were making chef recipes out of the multinode instructions we posted17:46
jonwoodrlucio: That's me.17:46
rlucioah right :)17:47
jonwoodIt's in progress, but paying work comes first ;)17:47
*** gzmask has joined #openstack17:48
*** johnbergoon has joined #openstack17:50
gzmaskdoes this project exist to make image migration between cloud providers possible?17:51
alekibangogzmask: not yet17:51
alekibangowell, depending on what you mean17:51
gzmasksomewhere down the road?17:51
alekibangoyou can use something like deltacloud or libcloud later17:51
alekibangowhen it will have nova support17:52
jdarcyIt makes setting up your own cloud provider easier.  Moving work between them is kind of a different problem.17:52
alekibangoto do that from the point of user17:52
jdarcyalekibango: I expect we'll be working on that Very Soon.17:52
alekibangobalancing resources between providers is not there :)17:53
alekibangobut might make sense17:53
gzmasksee... what's the different between this and Eucalyptus?17:53
jdarcyScheduling work across different providers is really pretty challenging.17:53
*** grizzlet1oth has joined #openstack17:54
gzmaskI know that this is done with python and Eucalyptus is done with java17:54
alekibangojaypipes: but might be interesting project17:54
jdarcyYou have to account for differences not just in the costs, but in how the costs are even expressed (e.g. instances defined by CPU vs. instances defined by memory, with different performance balances).17:54
*** silassewell has joined #openstack17:54
alekibangoyes, there would be need to harmonize accounting :)17:55
jdarcyThen the numbers all change on you, you have to reason about data locality, different people/groups having different budgets or rights/credentials on different providers, etc.17:55
zykes-jdarcy: wondering, what setup do you recommend for hosts with SAN ?17:55
alekibangojdarcy: still, this is done in transportation and logistics :)17:55
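(A toy illustration of the normalization problem jdarcy describes: two providers price "an instance" along different axes, so both must be mapped onto a common unit before their costs can be compared. All numbers and weights here are invented:)

```python
# Provider A defines instances by CPU count, provider B by memory; collapse
# both axes into one synthetic "capacity unit" so prices are comparable.
offers = [
    {"provider": "A", "cpus": 2, "ram_gb": 4, "price": 0.10},  # CPU-defined
    {"provider": "B", "cpus": 1, "ram_gb": 8, "price": 0.09},  # RAM-defined
]

def cost_per_unit(offer, cpu_weight=0.5, ram_weight=0.5):
    units = cpu_weight * offer["cpus"] + ram_weight * offer["ram_gb"]
    return offer["price"] / units

for offer in sorted(offers, key=cost_per_unit):
    print(offer["provider"], round(cost_per_unit(offer), 4))
```

Even this toy version shows the catch: the ranking depends entirely on the weights, which is jdarcy's point about differing "performance balances".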
*** BK_man has joined #openstack17:56
alekibangojdarcy: how do you store nova images?17:56
jdarcyzykes-: Um, I generally don't recommend hosts with SAN.  What are you actually trying to do?17:56
*** Grizzletooth has quit IRC17:56
zykes-jdarcy: sorry the cluster nodes17:56
zykes-have SAN17:56
alekibangojaypipes: we talked about HA image storage for nova17:56
alekibangoexploring the space17:57
*** rnirmal has joined #openstack17:57
jdarcyalekibango: Haven't decided yet how to store Nova images.  At one level, it's Just Another Back End for iwhd.  It just needs to talk to whatever get/put API exists for manipulating images "from outside" or have an agent "inside" if that's not possible.17:58
jdarcyalekibango: Never really looked at Nova myself, so I don't know yet what that'll look like.17:59
jdarcyalekibango: iwhd = Image WareHouse Daemon.  The storage component of Deltacloud:NG.17:59
alekibangoafaik nova stores disk images for kvm as files in directory17:59
alekibangofor the living instances17:59
jdarcyThen as long as we can get to that directory (e.g. if it's mountable via NFS/GlusterFS/etc.) then we can probably just use the filesystem back end.18:00
alekibangoand will use glance  for cold image storage and registry18:00
alekibangojdarcy: so, pnfs, glusterfs, ceph,  or RAID10 only  ??18:00
jdarcyRight, for Glance we'll just use that API.  We could also *provide* that API, but that's a whole different discussion.  ;)18:00
*** westmaas has quit IRC18:01
rlucioalekibango: yes, by default running instances are files in /var/lib/nova/instances/instance-<id> but nova-volume can be set up to use AOE instead18:01
alekibangorlucio: i need to read nova-volume :)18:01
alekibangowhat nova-volume does?18:01
alekibango(what else)18:02
*** krish has joined #openstack18:02
rluciojust that, it allocates local filespace for vms or AOE space and manages them18:02
* alekibango is reading src. its short18:02
*** grizzlet1oth is now known as grizzletooth18:03
*** kevnfx has quit IRC18:05
*** grizzletooth has quit IRC18:06
*** grizzletooth has joined #openstack18:06
rluciooh i take back what i said :) looks like nova-compute allocates disk files, and the volume manager is like EBS volumes via aoe18:06
uvirtbotNew bug: #663410 in nova "Ips aren't set to leased properly if nova-network node is down or restarted" [Undecided,New] https://launchpad.net/bugs/66341018:07
rluciofoot --> mouth18:07
*** burris has joined #openstack18:07
*** grizzletooth has quit IRC18:07
*** grizzletooth has joined #openstack18:07
vishyrlucio: that is correct18:08
vishynot recommended though18:08
rluciovishy: heh, thanks for confirming ... what is not recommended?18:08
vishyaoe modules have been causing kernel panics18:08
vishyand destroying raid configurations on reboot18:08
rluciooh yea, i've heard some .. ah.. not great things about aoe scaling18:09
vishynot to mention it is incredibly slow18:09
vishy:( :(18:09
vishygoing to propose iscsi post-austin18:09
vishyit may be ok in maverick18:10
vishywe are using lucid and it has caused way too many headaches18:10
rluciothat seems to be the standard way of doing it, afaik18:10
creihtmtaylor: ok back to cutting a tarfile18:10
rlucioiscsi i mean18:10
mtaylorcreiht: yes. so I asked dendrobates a question but he ignored me :)18:11
creihtdendrobates: do we have a preferred method for cutting RC packages?18:11
vishyrlucio: yes iscsi works much better18:11
dendrobatesmtaylor: sorry.18:11
dendrobatesmtaylor: it is on my list of things to figure out today18:11
mtaylorcreiht: my vote is to just release 1.1.0 and mention that it's an RC and then when you have changes release a 1.1.1 and at some point say "hey, 1.1.4 is good"18:11
mtaylordendrobates: cool18:11
creihtmtaylor: that sounds lame :)18:12
dendrobatesI was thinking of tagging austin-rc and tarring from there.18:13
mtaylordendrobates: so my question is more on what to version the tarball18:13
creihthow does tagging work with lp?18:13
dendrobatesyeah, I want to move away from version numbers and towards dates, for releases18:14
mtaylorlp has nothing to do with tagging - it's all a bzr level thing18:14
mtaylordendrobates: that's fine with me, I prefer dates18:14
* creiht dislikes dates as release versions18:14
dendrobatescreiht: why?18:14
creihtit seems like most people who do go back to normal versioning18:15
dendrobatesI don't like version numbers because they have built in connotations unrelated to your software.18:15
alekibangodendrobates: +118:16
alekibangothat depends on development model18:16
dendrobatesWhat  I want to do is avoid the version 1.0 controversy in Nova18:16
creihtdendrobates: that is only true during the adolescent stages of a new project18:16
vishymtaylor: any thought on hudson/tarmac stripping whitespace?  Or should we just start yelling at developers and tell them to configure their dev env properly?18:16
dendrobatescreiht: which we are in.18:16
creihtand which we will get out of :)18:16
mtaylorvishy: oh, well, sorry. totally forgot. :)18:17
dendrobatesbut you still have the x.0 issue where people treat any x.0 like it's a beta18:17
dendrobatesthanks to MS18:17
mtaylorvishy: I _WOULD_ definitely start yelling at devs to configure their dev env properly18:18
creihtand that is a bad thing? :)  google seems to get by with everything being beta forever :)18:18
vishymtaylor: of course that is about as valuable as yelling at a brick wall :)18:19
arthurchas someone tested this workaround: http://etherpad.openstack.org/NovaMultinodeInstall ?18:20
mtaylorvishy: hehe18:20
uvirtbotNew bug: #663421 in swift "The SAIO documentation numbering scheme is crazy" [Undecided,New] https://launchpad.net/bugs/66342118:21
vishyarthurc: workaround?18:22
vishyalekibango: where do you live?18:22
arthurchumm, how to18:23
arthurcvishy: I think he lives in Europe, he has the same hour as me18:23
alekibangovishy: cz (eu)18:23
vishyalekibango: cool.  The work on documentation is much appreciated, btw.18:24
arthurcalekibango: I'm testing right now your "pad"18:24
alekibangovishy: i am trying to learn it by helping, it works better than just by waiting :)18:25
vishyalekibango: do you have a specific project in mind afterwards? Or you just like the technology?18:25
alekibangomy customer has a few hundred servers and he wants to evolve to an easier, more manageable system...18:26
vishyalekibango: gotcha18:27
*** roamin9 has quit IRC18:32
*** jc_smith has joined #openstack18:33
*** Kdecherf has quit IRC18:36
alekibangovishy: and i would also like to prepare some cloud of DESKTOPs :)18:37
*** Kdecherf has joined #openstack18:37
alekibangothat might sound strange, but it will work :)18:38
crazedvishy: thanks for the information18:45
uvirtbotNew bug: #663437 in nova "The root drive for an instance is always the size of the image instead of being resized" [Undecided,New] https://launchpad.net/bugs/66343718:46
*** daleolds has joined #openstack18:48
*** kashyapc has quit IRC18:49
netbuzzmehey Vishy18:49
netbuzzmethis is Ajey Gore from ThoughtWorks18:49
netbuzzmewe are trying to get Nova working on Centos18:50
netbuzzmeand we are still stuck with problems on network virtualization part18:50
alekibangonetbuzzme: is something unusual in logs?18:50
netbuzzmedo you guys suggest that we should run it on Centos or we should move to ubuntu18:50
alekibangonetbuzzme: nova devels are using ubuntu18:51
alekibangoi use debian rather :)18:51
alekibangoand source works for me18:52
alekibango(when configured)18:52
netbuzzmei am badly stuck on nova actually, one python module: libxml2 for python2.6 on centos5.518:52
netbuzzmewe at ThoughtWorks use CentOS and Redhat mainly18:52
netbuzzmeand thats the reason we are trying with CentOS18:52
netbuzzmeso.. thats why18:52
alekibango2.6 is well needed right now18:52
vishyi'm sure it will be possible, but it may require a lot of custom hacking18:53
*** rnirmal has quit IRC18:53
vishylibxml is barely used iirc18:53
netbuzzmewe had to install different version for python18:53
*** rnirmal has joined #openstack18:53
netbuzzmeanyway... I will get back to you on this - let me re-run my code18:53
netbuzzmedo you know if someone is doing rpm packaging for OpenStack18:54
netbuzzmeif not then I can put some efforts into that18:54
alekibangonetbuzzme: there were some18:54
jdarcyLet me find that Bugzilla ticket.18:54
alekibangofedora might be first :) as it has dependencies18:54
mtaylornetbuzzme: ubuntu++18:54
netbuzzmemtaylor: lots of enterprises are not moved to ubuntu yet18:54
mtaylornetbuzzme: I know ... makes me sad18:55
mtaylornetbuzzme: in my other project, centos5 is my largest constant nightmare18:55
netbuzzmemtaylor:ldap over tls did not work for us on ubuntu, we integrate with active directory18:55
jdarcyHere's one for Swift.  https://bugzilla.redhat.com/show_bug.cgi?id=61763218:55
uvirtbotbugzilla.redhat.com bug 617632 in Package Review "Review Request: openstack-swift - OpenStack Object Storage (swift)" [Medium,Closed: errata]18:55
mtaylornetbuzzme: hrm. interesting. that's quite annoying :)18:55
jdarcyDunno about Nova, though.18:55
netbuzzmemtaylor:yeah, and actually if we move to ubuntu, we need to change a lot of stuff18:55
netbuzzmemtaylor:since I am head of global IT there, I can influence it here, but it won't work unless we get TLS binding over ldap working with AD and MSSFU on ubuntu, sadly it does not work right now18:56
netbuzzmemtaylor:so we need to be with CentOS for now18:56
netbuzzmevishy:I got your contact from Jonathan at Rackspace18:57
netbuzzmevishy:thats why I thought I should ping you18:57
netbuzzmevishy: anyway we will start effort on packaging Nova for CentOS18:58
vishyvishy: awesome, unfortunately we haven't done anything with centos18:59
vishynetbuzzme: ^^18:59
vishyi talk to myself sometimes :p18:59
vishynetbuzzme: it would be awesome to have a working package though19:00
creihtmtaylor: so if I'm just uploading a tarball, is there anything preventing me from making the rc change in setup.py locally and building the tarball and uploading?19:01
creihtwithout checking in the rc change to setup.py19:01
creihtmtaylor: ugh... I'm still not an admin for lp:swift :/19:02
dendrobatescreiht: really, I thought I added you.19:02
dendrobateslet me check19:02
netbuzzmevishy: okay19:03
dendrobatescreiht: what is it not letting you do?19:04
mtaylorcreiht: I'm not either19:06
mtaylordendrobates, creiht: lp:swift is a branch owned by hudson19:06
creihtdendrobates: I don't see links that I used to a while back19:06
creihtis what I'm referring to19:06
mtaylorcreiht: ah - that's different :)19:06
*** krish has quit IRC19:06
creihtI also didn't see myself in the OpenStack Administrators group (which is listed as the maintainer)19:07
dendrobatesso the project,I'm fixing that.  I thought what I did last time fixed it.19:07
dendrobatesjust a sec19:07
dendrobatescreiht: try it  now19:09
creihtdendrobates: ok that looks better :)19:10
creihtThe layers of obfuscation of groups within groups gets confusing19:11
zykes-jdarcy: what i was wondering is that my company has a compellant san that we use on the compute nodes, can i use that for storage ?19:12
dendrobatesvishy: I am getting a traceback from nova-objectstore on trunk: http://paste.openstack.org/show/66/19:12
dendrobatesvishy: this is new for me.  before I dig in, have you seen this?19:13
*** Ryan_Lane|food is now known as Ryan_Lane19:14
vishywrong version of boto19:14
vishyyou need 1.9b19:15
vishyor that happens19:15
vishygoing to lunch, see you at the meeting19:15
*** vishy is now known as vishy-afk19:15
zykes-what's the ppa for nova ?19:16
*** nielsh has joined #openstack19:18
*** westmaas has joined #openstack19:19
dendrobateshmm python-boto | 1.9b-1ubuntu3 | http://archive.ubuntu.com/ubuntu/ maverick/main amd64 Packages19:22
jdarcyzykes-: I think that question is better answered by someone else here.19:22
*** nielsh has left #openstack19:22
dendrobateszykes-: https://launchpad.net/~nova-core/+archive/ppa19:25
crazeddamn all that work on openstack and i had to rip it down hahah19:25
crazedoh well i like what i see a lot19:25
crazedthe configuration files are a bit.. messy but that's not a big deal19:25
* alekibango falls asleep, gn8 for now19:26
creihtugh... I can't seem to figure out how to get my pgp key up to lp :/19:27
creihtdendrobates: yeah but when I try to import, it says that it can't19:28
*** dabo has joined #openstack19:28
jaypipesalekibango: sorry, stepped away for a bit...19:29
dendrobatescreiht:  to #launchpad and scream :)19:29
crazedi uploaded my key awhile back19:29
crazedbut i have NO idea where it is19:30
crazedprobably lost forever19:30
jaypipesjdarcy: "how to store Nova images"?  Nova doesn't yet have its own image format, so no worries there ;)19:30
dendrobatesmtaylor: the ppa's have not rebuilt in 2 weeks.19:30
creihtthere we go19:31
*** joearnold has quit IRC19:31
jdarcyjaypipes: Good, I'll just implement a back end that writes to /dev/null.  I'll call it the "web scale" back end.19:31
jaypipesjdarcy: well played.19:32
*** al-maisan is now known as almaisan-away19:32
* jaypipes cat fedora > /dev/null19:32
creiht/dev/null is webscale19:32
jdarcyBut seriously, whenever you have a format/API I expect we'll add a back end for it.19:32
jaypipesjdarcy: yup, will do of course!19:32
jaypipesjdarcy: can I count on your input on the ML about the new image format? would be great to get...19:33
jdarcyI'd love to throw Fedora in /dev/null.  Just had two F14 VM installs fail today, apparently because it can't handle having <1GB.19:33
crazedyeah you need more than a gig of memory to get the installer running19:34
jaypipeseday: alrighty, let's get the rumble on in here about distributed data store! ;P19:34
jdarcyjaypipes: Sure, I'll take a look.19:34
edayjaypipes: hehe.. just wanted to know how one would configure it the way your comments describe in the etherpad :)19:35
arthurcjdarcy: I've installed F14 alpha on kvm with 512MB...19:35
edayjaypipes: because I don't think it's currently possible19:35
jaypipeseday: actually, I've been looking at how swift does their sqlite replication and I think for our data sizes (local to nodes, of course) that might work...19:36
creihtmtaylor: so how do I upload a tarball for the 1.1.0 rc?19:36
jaypipeseday: and then for places where we *do* need aggregation or a very large dataset, we can use a "beefier" RDBMS.19:37
jaypipeseday: like postgresql.19:37
*** dysinger has quit IRC19:37
creihtthe only way I have found wants to upload for the 1.0 series19:37
jaypipeseday: so, right now, whatever node runs the various daemons in nova/bin/ will create a set of data in sqlite that corresponds to what the daemon does in the grand scheme of things.19:38
edayjaypipes: well, the API server needs access to those as well, and it will be the one creating them first19:39
mtaylorcreiht: go to the 1.1.0 milestone, and you'll see a link "create a release"19:39
mtaylorcreiht: there you can release that milestone and then upload files to that release19:39
mtaylorcreiht: releases are always the result of milestones19:39
jaypipeseday: and my point on the etherpad was that we currently have 1) no tests for the data distribution across multiple nodes, and 2) no plan for cross-node-datastore IntegrityError raising through the ORM.19:39
mtaylorso - what really probably wants to happen here is that you probably want to make a 1.1.0-RC milestone, and then release that19:40
Ryan_Lanenetbuzzme: regarding tls binding to AD: do you have other things that do this properly? I've never gotten TLS binding working with AD, for anything19:40
jaypipeseday: so, while I like the etherpad discussion, it's missing some key details on *where* each *specific table* will reside.19:40
Ryan_Lanenetbuzzme: I'm not sure AD supports it19:40
edayjaypipes: 100% agree on the testing, but I don't see how one would configure the current code to be distributed (our first couple paragraphs)19:41
Ryan_Lanenetbuzzme: ldaps does generally work, though19:41
edayjaypipes: because each worker node does not have its own data store. they each access it, but it needs to be shared with the API servers, so it's central19:41
creihtmtaylor: hehe.. now the -rc milestone is after the 1.1.0 milestone19:41
jaypipeseday: in the default configuration, if someone runs bin/nova-api on one node, bin/nova-network on another, and bin/nova-compute on another, the data will be spread among all 3 nodes by default.19:41
mtaylorcreiht: hehe. well, that can probably change by adjusting the 1.1.0 target date19:42
edayjaypipes: but nothing will work :)  api server dumps all config into the db, and sends a message with a reference to the record in the DB19:42
jaypipeseday: you and I know nothing will work.  but the test suite doesn't ;)19:42
edayjaypipes: if the worker can't access the same DB as the api, it can't do much19:42
creihtthere we go19:43
jaypipeseday: if we implemented the sqlite replication that is in swift, each node would keep a copy of the same data...19:43
* jaypipes likes creiht's sqlite replication...19:43
edayjaypipes: yep, agree on the testing, but that's another discussion19:43
jaypipeseday: ok, shelve the testing part of the discussion...19:44
creihtjaypipes: well redbo wrote most of that19:44
jaypipescreiht: it's ok, just take credit for it.19:44
edayjaypipes: well, sqlite replication may not be enough... because the API servers and scheduler workers need aggregates of all the data in a single DB, where each worker has just what that node cares about19:44
jaypipeseday: yes, that is all true.19:44
edayjaypipes: so, SQLite replication *may* be the answer, and that's something to consider during the summit discussion :)19:45
jaypipeseday: which is why I said above we could use a "beefier" rdbms for that aggregation stuff.  The problem with that, of course, is keeping the data in sync.  Or, we just say "on conflicting data, use the local data store on the node"19:45
edayjaypipes: but it sounds like your comments in the etherpad imply that's possible right now, and I don't understand how :)19:45
jaypipeseday: I meant the following: "if you install the nova binaries on different machines and don't change the default (of having a local sqlite data store), nothing will work, as all the data will be distributed."19:46
edayjaypipes: yup, agree. So my proposal is to figure out how to aggregate/sync that data basically.19:47
jaypipeseday: so I suppose I said it weirdly... :)19:47
jaypipeseday: ++19:47
rlucioprobably distributed is the wrong word there :)19:47
jaypipeseday: and what is your proposed solution?  or are you just saying "let's get a solution"?19:47
jaypipesrlucio: true ;)19:47
edayjaypipes: I guess I had the assumption that "currently in a WORKING nova system... this is how it works." :)19:47
Ryan_Laneis the worry with a database like mysql the lag replication time?19:48
jaypipeseday: ah, let me correct the etherpad then19:48
Ryan_Laneerr replication lag time19:48
jaypipesRyan_Lane: you'll always have some sort of lag unless you use some sort of real-time database or something like MySQL Cluster.19:48
creihtmtaylor: woot... tarball cut, can you update the package stuff?19:48
jaypipesRyan_Lane: regardless of whether you use a sync process or not...19:48
edayjaypipes: I don't have a specific solution, that's something for discussion. I just wanted the problem stated with some ideas of how to solve, then we can hash out details at summit. But the general solution is to have each worker keep its own private store and somehow each one will push updates to api/scheduler. This may be within the DB, this may be within the app for more context (via msg bus)19:49
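The pattern eday sketches here — each worker keeping a private store and publishing updates over a message bus to a single writer — might look roughly like the following. This is purely an illustration, not Nova code: the class names are invented, and a `queue.Queue` stands in for the real message bus (AMQP in Nova).

```python
import json
import queue
import sqlite3

bus = queue.Queue()  # stand-in for the real message bus (e.g. AMQP)

class Worker:
    """A compute/network worker with its own private local store."""
    def __init__(self, name):
        self.name = name
        self.db = sqlite3.connect(":memory:")  # private per-worker store
        self.db.execute("CREATE TABLE instances (id TEXT, state TEXT)")

    def set_state(self, instance_id, state):
        # Record locally, then publish -- never write to the central DB.
        self.db.execute("INSERT INTO instances VALUES (?, ?)",
                        (instance_id, state))
        bus.put(json.dumps({"worker": self.name,
                            "id": instance_id, "state": state}))

class Aggregator:
    """The only component that writes to the shared/aggregated view."""
    def __init__(self):
        self.view = {}

    def drain(self):
        while not bus.empty():
            msg = json.loads(bus.get())
            self.view[msg["id"]] = msg["state"]

worker = Worker("compute-1")
worker.set_state("i-1", "running")
agg = Aggregator()
agg.drain()
print(agg.view)  # {'i-1': 'running'}
```

A compromised worker in this scheme can only corrupt its own store and send bogus messages, which the single writer can validate — the security property vishy raises later in the discussion.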
Ryan_Lanewe solve that at wikimedia by using masters for reads for requests that *must* be up to date19:49
jaypipeseday: ++ ok, I'll remove my thoughts on the etherpad then, as they were oddly stated (except for the testing issue mentions)19:50
edayjaypipes: yeah, keep the testing for sure :)19:50
Ryan_Lanequestion is, for most queries, does it matter if the data is completely up to date?19:50
jaypipesRyan_Lane: yup, but there's still *some* lag ;)19:50
Ryan_Laneright. whether that matters or not is the question19:50
jaypipesRyan_Lane: low value data and high value data... treated differently lots of times, and that's fine by me! :)19:50
Ryan_Lanethe master will always be up to date19:51
Ryan_Lanethat's one benefit of using master/slave instead of master/master19:51
jaypipeseday: updated. lemme know if that little paragraph makes more sense...19:52
mtaylorcreiht: yup19:52
jaypipesRyan_Lane: ya, agreed.19:52
jaypipesRyan_Lane: but then you're not looking at replicated data are you? ;)19:53
edayjaypipes: makes sense19:53
jaypipeseday: k19:53
Ryan_Lanewhat data needs to be replicated19:53
jaypipeseday: sorry for the confusion! :)19:53
jaypipesRyan_Lane: you said above that you worried about "replication lag in MySQL" and I'm saying if you're hitting the master, you're not hitting any replicated data...that's all.19:54
Ryan_Lanethat is handled by the filesystem, right?19:54
creihtmtaylor: also look over the release stuff to make sure I did it right (if you don't mind)19:54
Ryan_Lanewait. yes. you are hitting replicated data19:54
edayjaypipes: np :)  was just hoping I didn't miss some big patch that made that possible19:54
Ryan_Laneeveryone writes to the master, and the master replicates the data to the slaves. you read from the master directly to ensure that you avoid lag time.19:55
jaypipesRyan_Lane: right, so you're not reading the replicated copy of the data.19:55
Ryan_Lanequeries that aren't time sensitive go to the slaves19:55
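The read-routing rule Ryan_Lane describes (writes and time-sensitive reads pinned to the master, everything else balanced across slaves) can be sketched in a few lines. Connection objects are plain strings here for brevity; the names are made up.

```python
import itertools

class Router:
    """Route DB access between a master and replication slaves."""
    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)  # simple round-robin

    def connection(self, write=False, must_be_fresh=False):
        # Replication lag only matters for reads that must be current,
        # so only those (and all writes) go to the master.
        if write or must_be_fresh:
            return self.master
        return next(self._slaves)

r = Router("db-master", ["db-slave1", "db-slave2"])
assert r.connection(write=True) == "db-master"
assert r.connection(must_be_fresh=True) == "db-master"
print(r.connection())  # one of the slaves
```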
jaypipesRyan_Lane: do you use memcached as well in between for other data?19:56
Ryan_Lanewe fairly heavily use memcached19:56
zykes-memcached for what ?19:56
Ryan_Lanemediawiki object caching19:57
*** metoikos has joined #openstack19:57
Ryan_Laneuser sessions, other things19:57
*** _morfeas has quit IRC19:57
*** joearnold has joined #openstack19:58
Ryan_Laneanything that is expensive that is cheaper to cache than to recreate :)19:58
creihtYou guys need to think more distributed :)19:58
Ryan_Lanewe had 114k req/s today at peak19:59
Ryan_Lanewe have like 350 servers give or take19:59
*** morfeas has joined #openstack19:59
creihtRyan_Lane: how much of your traffic is read vs. write?20:02
*** joearnold has quit IRC20:02
Ryan_Lanenearly all is read20:02
*** burris has quit IRC20:03
*** joearnold has joined #openstack20:05
*** ctennis has quit IRC20:06
*** burris has joined #openstack20:07
*** morfeas has quit IRC20:08
*** westmaas has quit IRC20:11
*** jakedahn has joined #openstack20:12
*** burris has quit IRC20:15
*** arthurc has quit IRC20:16
*** ctennis has joined #openstack20:19
*** ctennis has joined #openstack20:19
dendrobateswow, I was confused.  pip installed boto 2.0b1 in /usr/local/python on my system which does not work with nova20:22
*** pvo has joined #openstack20:26
*** pvo has joined #openstack20:26
*** ChanServ sets mode: +v pvo20:26
*** dendrobates is now known as dendro-afk20:27
vishy-afkeday: catching up on scrollback, one possibility we've been discussing is keeping data central but just giving all of the workers read only access20:31
*** vishy-afk is now known as vishy20:31
*** allsystemsarego has quit IRC20:31
vishyeday: and pushing updates from the workers through the queue, only having one place that actually writes to the db20:31
vishyeday: this is more for security, not allowing a compromised worker to compromise the entire system, but it could be very useful for scaling as well20:32
vishybecause the workers could just have read slaves of the db.20:32
*** johnpur has joined #openstack20:34
*** ChanServ sets mode: +v johnpur20:34
edayvishy: security wise that would be ok, but I wonder about scalability...20:34
*** dendro-afk is now known as dendrobates20:35
*** rnirmal has quit IRC20:36
Ryan_Lanefrom a scalability point, that would be a major plus20:37
*** silassewell has quit IRC20:37
Ryan_Laneyou can load balance your reads20:37
Ryan_Laneas long as no reads are time sensitive, that's a good idea20:37
edayRyan_Lane: well, I don't think reads are going to be the bottleneck here, at least from the workers. The workers will mostly be doing writing (state changes, creation, ...)20:39
*** perestrelka has quit IRC20:39
edayRyan_Lane: the API front end will be more read heavy though20:39
Ryan_Lanevishy was recommending read-only access for the workers20:39
Ryan_Laneah. pushing through the queue20:40
Ryan_LaneI knew I must have been missing something :)20:40
edayRyan_Lane: yeah, write via confirmed queue msg :)20:40
vishyeday, Ryan_Lane: some sort of cluster agregation will probably still be necessary, one writer per cluster, perhaps?20:41
vishylots to discuss at the summit for sure20:41
Ryan_Lanewhy the need for more than one master?20:41
edayvishy: yeah.. I was thinking scheduler per cluster, and maybe the sched processes manage all the writes20:41
Ryan_Laneredundancy across datacenters?20:41
edayvishy: and then API servers would aggregate schedulers/clusters20:42
vishyRyan_Lane: our goal is to support 1 million host machines20:42
Ryan_Laneah. ok. I can see why then :)20:42
vishy1 million hosts sending through the queue is probably a bottleneck20:42
creihtThere is a reason there are no queues in swift :)20:43
edaycreiht: well, probably not a traditional message queue, but many things are a queue of some form (ie, replication stream) :)20:44
* creiht was referencing a traditional message queue20:45
Ryan_Lanemaybe break the hosts into sections, and have a queue per section?20:45
Ryan_Lanex number of hosts per section, and therefore per queue?20:45
edaycreiht: my point was queues are fine to use in large, distributed systems, if done right of course (whether they are traditional generic things or application-specific)20:47
*** burris has joined #openstack20:47
edaycreiht: in fact, they are kind of required :)20:47
edayRyan_Lane: Yeah, we had a number of discussions of how to break the clusters out at the last summit, as well as experience with how rackspace does it now20:48
edayRyan_Lane: need to work with network/switch/data center/... boundaries, which give natural partitioning20:48
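Ryan_Lane's queue-per-section idea can be sketched with a stable hash: each host always maps to the same section, and each section has its own queue, so no single queue carries traffic for a million hosts. (In practice, as eday notes, sections would follow network/switch/datacenter boundaries rather than a hash; the section count and queue naming here are invented.)

```python
import hashlib

NUM_SECTIONS = 16  # hypothetical; real partitioning follows DC layout

def section_for(host):
    # Stable hash so a given host always lands in the same section.
    digest = hashlib.md5(host.encode()).hexdigest()
    return int(digest, 16) % NUM_SECTIONS

def queue_for(host):
    return "compute.section-%d" % section_for(host)

assert queue_for("host-0042") == queue_for("host-0042")  # stable mapping
print(queue_for("host-0042"))
```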
*** cloudmeat1 has joined #openstack20:48
Ryan_Laneah, yeah. that makes a lot of sense20:49
vishyeday: right now we're running multiple api nodes with HAProxy in front20:50
pvoRyan_Lane: We're looking at things at a regional level, then trying to figure out how to logically segment it at the level eday was mentioning20:50
pvowe (rackspace) will have to do a lot of work around this soon20:51
vishyand monit managing them20:51
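For the setup vishy mentions — several nova-api nodes behind HAProxy — a minimal config fragment might look like the following. All names, addresses, and ports are invented for illustration; this is not a config from the deployment being discussed.

```
# Hypothetical haproxy.cfg fragment: round-robin over nova-api nodes.
listen nova-api
    bind 0.0.0.0:8773
    balance roundrobin
    server api1 10.0.0.11:8773 check
    server api2 10.0.0.12:8773 check
```

A process supervisor such as monit would then watch each nova-api process and restart it on failure, while HAProxy's health checks route around a node that is down.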
*** cloudmeat has quit IRC20:51
vishywe are still seeing greenthread duplicate reads occasionally20:51
vishywhich pretty much kills the api20:51
edayvishy: reads on what? DB? queue? ...?20:54
*** kevnfx has joined #openstack20:54
zykes-what's recommended20:54
zykes-MySQL or PostgreSQL ?20:54
dendrobatesmeeting in 5 min in #openstack-meeting20:55
zykes-is it okay to participate even though i'm not in a dev team ?20:56
creihtzykes-: indeed... it is open to everyone20:56
* zykes- joins20:57
dendrobatesit will be extra short today though20:57
dendrobatesthis time I really mean it20:57
creihtdendrobates: you are jinxing it just by saying that :)20:58
vishyvishy: not exactly sure where, we didn't have time to debug it, so the haproxy/monit was a workaround20:58
zykes-for a ten node setup should i do nova services on 3-4 standalone machines and then the rest compute - for documentation purposes?20:58
vishyeday: ^^20:58
vishyi'm talking to myself a lot today :(20:58
edayvishy: hehe20:58
vishyeday: i think it was happening when a call would fail in the middle20:59
vishyrpc.call that is20:59
edayvishy: if you can get a stacktrace with the module, that would be good. Then we'd know where to pool the connections20:59
edayvishy: ahh, ok20:59
zykes-what's haproxy for ?20:59
vishyeday: if i see it again i'll try and get a stack trace20:59
*** pothos has quit IRC21:01
*** pothos has joined #openstack21:01
*** pothos has quit IRC21:03
*** pothos has joined #openstack21:04
*** morfeas has joined #openstack21:07
*** cloudmeat has joined #openstack21:14
*** ddumitriu has joined #openstack21:15
*** niks has joined #openstack21:15
nikshiya all21:15
vishyniks: hi, most of us are distracted in release meeting21:16
nikshi vishy..is it in the same channel or its on different channel? can i join?21:17
*** cloudmeat1 has quit IRC21:17
niksthanks vishy21:18
mtaylorcreiht: release looks good - for the debian package I'm going to need to set the version to 1.0.99+1.1.0rc1 because of the way version numbering works ... 1.1.0rc1 is treated as greater than 1.1.0, so if someone installs 1.1.0rc1 from the PPA, it would not be considered an upgrade to go to 1.1.021:21
mtaylorcreiht: it's a normal enough debian packaging thing to need to do - I just wanted to give you a heads up about it21:21
creihtmtaylor: ok that is good to know for future21:22
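The ordering problem mtaylor describes comes from how Debian-style version comparison treats a trailing suffix: a leftover chunk like "rc1" makes the version sort *after* the plain release. The comparator below is a deliberately simplified sketch (it ignores epochs and the special "~" rule) just to demonstrate why "1.1.0rc1" > "1.1.0" and why the "1.0.99+1.1.0rc1" workaround restores the expected order.

```python
import re

def split_version(v):
    # Alternating digit and non-digit chunks, similar to what dpkg does.
    return re.findall(r"\d+|\D+", v)

def compare(a, b):
    """Simplified Debian-style compare: -1, 0, or 1."""
    pa, pb = split_version(a), split_version(b)
    for x, y in zip(pa, pb):
        if x == y:
            continue
        if x.isdigit() and y.isdigit():
            return -1 if int(x) < int(y) else 1
        return -1 if x < y else 1
    # Any leftover chunk ("rc1") makes that version the greater one.
    return (len(pa) > len(pb)) - (len(pa) < len(pb))

assert compare("1.1.0rc1", "1.1.0") > 0         # rc1 looks *newer* than final
assert compare("1.0.99+1.1.0rc1", "1.1.0") < 0  # workaround sorts correctly
```

(Modern Debian packaging solves this with a "~" suffix — "1.1.0~rc1" sorts before "1.1.0" — but the "+" prefix trick above is what mtaylor describes here.)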
*** jrussellnts_ has joined #openstack21:23
*** jakedahn has quit IRC21:27
zykes-prefered to use for nova - mysql or pgsql ?21:29
grizzletoothzykes: mysql - easier to scale21:30
*** jdarcy has quit IRC21:35
zykes-so dendrobates, vmware support - anything that's been requested ?21:43
dendrobateszykes-: not on the horizon, but if someone writes it we'll take it.21:46
vishydendrobates: never mind on the soren bug21:46
vishydendrobates: looks like soren fixed it already and we just don't have it in deploy yet21:47
vishysneaky :)21:47
*** morfeas has quit IRC21:49
uvirtbotNew bug: #663551 in nova "OpenStack API does not give JSON/XML output for 500s" [High,New] https://launchpad.net/bugs/66355121:51
*** pvo_ has joined #openstack21:53
*** pvo_ has quit IRC21:53
*** pvo_ has joined #openstack21:53
*** ChanServ sets mode: +v pvo_21:53
*** DubLo7 has joined #openstack21:53
*** pvo has quit IRC21:56
*** jakedahn has joined #openstack21:57
*** pvo_ has quit IRC21:57
*** ppetraki has quit IRC22:01
*** iammartian has quit IRC22:01
*** mdomsch has quit IRC22:02
*** jakedahn has quit IRC22:03
*** netbuzzme has left #openstack22:05
uvirtbotNew bug: #663559 in nova "pip-depends lists incorrect python-boto version" [Critical,Confirmed] https://launchpad.net/bugs/66355922:06
*** miclorb_ has joined #openstack22:10
*** jakedahn has joined #openstack22:10
dendrobatesanyone care to review my 2 byte change:  https://code.launchpad.net/~dendrobates/nova/lp663559/+merge/3889222:10
_0x44I dunno... those look like expensive bytes.22:13
_0x44It's approved22:14
*** johnpur has quit IRC22:14
_0x44And with that, I'm going to bed. Night openstack kids.22:14
vishyfastest approval ever?22:14
vishydendrobates: the other bug/patch i submitted22:15
vishydon't know if it is critical for release22:15
vishyit interferes a bit with usability but it isn't exactly a bug22:15
vishys/bug/showstopper bug/22:15
*** Ryan_Lane has quit IRC22:18
dendrobates_0x44: are you in Italy?22:19
dendrobatesvishy: what is the bug #22:20
dendrobatesbug #66356822:21
uvirtbotLaunchpad bug 663568 in nova "Instance sizes aren't set to meaningful defaults" [Undecided,New] https://launchpad.net/bugs/66356822:21
dendrobatesthat one22:21
*** morfeas has joined #openstack22:21
uvirtbotNew bug: #663565 in swift "No port in default_cluster_url causes error" [Critical,New] https://launchpad.net/bugs/66356522:21
*** DubLo7 has left #openstack22:22
uvirtbotLaunchpad bug 663437 in nova "The root drive for an instance is always the size of the image instead of being resized" [Undecided,New]22:22
*** Ryan_Lane has joined #openstack22:22
dendrobatesit is annoying, not critical, but with a pretty simple fix22:24
vishyjust posted a fix to the 663568 as well22:24
dendrobatesvishy: is this something you have already fixed in production in Nebula?22:25
dendrobates663437  I mean?22:25
*** Ryan_Lane has quit IRC22:25
vishyyes both of those22:25
uvirtbotNew bug: #663568 in nova "Instance sizes aren't set to meaningful defaults" [Undecided,New] https://launchpad.net/bugs/66356822:26
*** tonywolf has quit IRC22:27
vishyi'm going through and trying to backport any bug fixes, and i think that is all of them22:27
dendrobatesis anyone else around to review? eday?22:28
*** jakedahn has quit IRC22:28
*** jakedahn has joined #openstack22:30
*** ArdRigh has joined #openstack22:30
gundlachi'll take a look22:31
*** jakedahn has quit IRC22:31
*** gzmask has quit IRC22:32
*** jakedahn has joined #openstack22:32
gundlachvishy: did you mean to remove c1.medium?22:33
gundlach(obviously you did as you moved the }, just checking that that was kosher)22:34
vishywe did cuz it seems a bit redundant22:34
*** jakedahn has quit IRC22:34
gundlach663568 approved22:34
*** jakedahn has joined #openstack22:35
gundlach663437 approved22:40
*** ianweller has joined #openstack22:40
*** kevnfx has quit IRC22:42
dendrobatesmtaylor: around?22:43
*** daleolds has quit IRC22:45
*** jrussellnts_ has quit IRC22:46
*** ddumitriu has quit IRC22:53
jaypipessirp, gundlach: fixes are in for https://code.launchpad.net/~jaypipes/nova/glance-image-service/+merge/3843322:58
*** cloudmeat has quit IRC22:58
gundlachthanks.  sirp approved 10 secs ago :)22:58
*** cloudmeat has joined #openstack22:58
*** cloudmeat has quit IRC23:03
*** kevnfx has joined #openstack23:04
gundlachjaypipes: approved.23:04
jaypipesgundlach: ah, cheers.23:04
*** masumotok has joined #openstack23:06
* dendrobates wanders off to have some dinner, will be back in a bit.23:09
*** kevnfx has quit IRC23:12
*** jonwood_ has joined #openstack23:16
*** pharkmillups has quit IRC23:19
*** jonwood has quit IRC23:19
*** pvo has joined #openstack23:22
*** ChanServ sets mode: +v pvo23:22
*** pvo has quit IRC23:27
*** rlucio has quit IRC23:34
*** jakedahn has quit IRC23:35
*** daleolds has joined #openstack23:36
*** cloudmeat has joined #openstack23:37
*** johnbergoon has left #openstack23:40
gundlachdendrobates, jaypipes, vishy: want to review https://code.launchpad.net/~gundlach/nova/lp663551/+merge/38900 which fixes bug #663551?23:42
uvirtbotLaunchpad bug 663551 in nova "OpenStack API does not give JSON/XML output for 500s" [High,Fix committed] https://launchpad.net/bugs/66355123:42
jaypipesgundlach: yup, will do.23:42
gundlachdendrobates: if you think this should go into Austin, please mark as Approved -- I'm not going to touch the Status.23:42
vishynot sure how useful a review from me is going to be, I haven't really looked at the openstack api at all23:43
xtoddxdendrobates: proposed fix for https://bugs.launchpad.net/nova/+bug/65334423:44
gundlachvishy: fine enough23:44
uvirtbotLaunchpad bug 653344 in nova "Image downloading should check project membership and publicity settings" [Critical,In progress]23:44
vishyxtoddx: you forgot ()23:45
vishyninja patch it quick23:45
*** pvo has joined #openstack23:46
*** pvo has quit IRC23:46
*** pvo has joined #openstack23:46
*** ChanServ sets mode: +v pvo23:46
*** pvo has quit IRC23:48
xtoddxa) i miss ruby, b) need more tests, i guess23:52
xtoddxvishy: where?23:54
vishyxtoddx: raise exception.NotAuthorized()23:54
*** sanjib has joined #openstack23:55
xtoddxnone of the raise NotAuthorized have them, should they all?23:55
vishyo rly23:55
vishyin theory you're supposed to construct the exception when you raise it yes23:56
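The style point vishy and xtoddx settle here — raise an exception *instance*, not the bare class — can be shown in a tiny example. This is illustrative only, not Nova's actual exception code: Python instantiates a bare class implicitly on `raise`, but constructing it explicitly is the convention, and it becomes mandatory the moment you want to pass a message.

```python
class NotAuthorized(Exception):
    """Stand-in for nova's exception.NotAuthorized."""
    pass

def fetch_image(is_member):
    if not is_member:
        # Explicit construction: raise NotAuthorized(), not NotAuthorized.
        raise NotAuthorized("image is private to the project")
    return "image-bytes"

try:
    fetch_image(is_member=False)
except NotAuthorized as e:
    print(e)  # image is private to the project
```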
*** joearnold has quit IRC23:56
xtoddxvishy: fixed that file all over, committing now23:57
vishycool thx23:58
*** jonwood_ has quit IRC23:59
*** metoikos has quit IRC23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!