*** shentonfreude has joined #openstack | 00:00 | |
*** dragondm has quit IRC | 00:01 | |
*** mattray has quit IRC | 00:01 | |
zns | NHM: how do you differentiate between users in the central directory and the ones in the local institute directory? | 00:02 |
*** mattray has joined #openstack | 00:03 | |
*** shentonfreude has quit IRC | 00:05 | |
*** DigitalFlux has quit IRC | 00:06 | |
*** DigitalFlux has joined #openstack | 00:06 | |
*** DigitalFlux has joined #openstack | 00:06 | |
*** matiu has quit IRC | 00:08 | |
*** maplebed has quit IRC | 00:08 | |
*** shentonfreude has joined #openstack | 00:08 | |
*** enigma1 has quit IRC | 00:09 | |
*** jdurgin has quit IRC | 00:16 | |
*** miclorb has quit IRC | 00:19 | |
*** gregp76_ has joined #openstack | 00:23 | |
*** gregp76_ has quit IRC | 00:24 | |
*** DigitalFlux has quit IRC | 00:26 | |
*** gregp76 has quit IRC | 00:26 | |
*** DigitalFlux has joined #openstack | 00:26 | |
*** DigitalFlux has joined #openstack | 00:26 | |
*** jeffjapan has joined #openstack | 00:27 | |
*** zns has quit IRC | 00:29 | |
*** mattray has quit IRC | 00:30 | |
*** Dumfries has quit IRC | 00:31 | |
*** drogoh has quit IRC | 00:32 | |
*** drogoh has joined #openstack | 00:33 | |
*** kashyap has quit IRC | 00:34 | |
*** nelson has quit IRC | 00:42 | |
*** nelson has joined #openstack | 00:42 | |
*** Eyk has quit IRC | 00:44 | |
*** kashyap has joined #openstack | 00:45 | |
*** Ryan_Lane has quit IRC | 00:45 | |
*** scalability-junk has quit IRC | 00:46 | |
*** msivanes has joined #openstack | 00:53 | |
*** westmaas1 has joined #openstack | 01:01 | |
*** adjohn has joined #openstack | 01:03 | |
*** cid has joined #openstack | 01:12 | |
*** j05h has quit IRC | 01:13 | |
cid | hello...I got some problems with the command euca-run-instances | 01:14 |
cid | euca-run-instances $emi -k openstack -t m1.tiny | 01:15 |
cid | euca-run-instances -k test -t m1.tiny ami-tty | 01:15 |
cid | ImageNotFound: Image ami-tty could not be found | 01:16 |
cid | any help would be great | 01:17 |
*** anticw has quit IRC | 01:19 | |
*** anticw has joined #openstack | 01:19 | |
*** j05h has joined #openstack | 01:19 | |
winston-d | cid : what's the output of 'euca-describe-images' ? is 'ami-tty' in that output? | 01:21 |
cid | ami-tty is not in the list | 01:24 |
cid | I just follow the instructions for the script installation | 01:24 |
cid | followed* | 01:25 |
*** zigo-_- has joined #openstack | 01:29 | |
zigo-_- | alekibango: Hi there! | 01:29 |
*** jfluhmann has joined #openstack | 01:34 | |
*** DigitalFlux has quit IRC | 01:44 | |
*** DigitalFlux has joined #openstack | 01:44 | |
winston-d | cid : you should first upload some images then use them to start instance | 01:46 |
*** toluene has joined #openstack | 01:46 | |
*** vernhart has quit IRC | 01:46 | |
toluene | hi! Does anyone here install openstack from source ? I got a little problem. | 01:46 |
*** kaz_ has joined #openstack | 01:49 | |
*** DigitalFlux has quit IRC | 01:54 | |
*** DigitalFlux has joined #openstack | 01:54 | |
*** stewart has joined #openstack | 01:59 | |
*** dendrobates is now known as dendro-afk | 02:01 | |
*** miclorb has joined #openstack | 02:07 | |
*** lorin1 has joined #openstack | 02:17 | |
*** lorin1 has left #openstack | 02:17 | |
*** lorin1 has joined #openstack | 02:17 | |
*** DigitalFlux has quit IRC | 02:28 | |
*** DigitalFlux has joined #openstack | 02:29 | |
*** fantasy has quit IRC | 02:29 | |
*** larry__ has joined #openstack | 02:32 | |
*** Ephur has quit IRC | 02:32 | |
*** DigitalFlux has quit IRC | 02:32 | |
*** Ephur has joined #openstack | 02:32 | |
*** DigitalFlux has joined #openstack | 02:33 | |
*** DigitalFlux has joined #openstack | 02:33 | |
*** lorin1 has quit IRC | 02:36 | |
HugoKuo__ | cid : $emi = ami-tty by default , but you did not have any image yet .... | 02:36 |
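For context on cid's error above: the scripted install does not register any machine image, so ami-tty simply does not exist yet. A rough, hedged sketch of the usual euca2ools flow from that era follows; the image file, bucket name and resulting id are illustrative, not values from cid's setup.

```sh
# hedged sketch: bundle, upload and register an image, then launch it by the id euca-register prints
euca-bundle-image -i ttylinux.img                         # writes a manifest (by default under /tmp)
euca-upload-bundle -b mybucket -m /tmp/ttylinux.img.manifest.xml
euca-register mybucket/ttylinux.img.manifest.xml          # prints an id such as ami-xxxxxxxx
euca-describe-images                                      # the new id should now appear here
euca-run-instances ami-xxxxxxxx -k test -t m1.tiny
```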
*** gaveen has joined #openstack | 02:40 | |
*** cid has quit IRC | 02:41 | |
*** DigitalFlux has quit IRC | 02:42 | |
*** DigitalFlux has joined #openstack | 02:42 | |
*** DigitalFlux has joined #openstack | 02:42 | |
*** fantasy has joined #openstack | 02:42 | |
*** jakedahn has joined #openstack | 02:44 | |
*** cid has joined #openstack | 02:49 | |
*** DigitalFlux has quit IRC | 02:50 | |
*** DigitalFlux has joined #openstack | 02:51 | |
*** DigitalFlux has joined #openstack | 02:51 | |
HugoKuo__ | I found that nova-network is really hard to do HA :< | 02:52 |
*** Zangetsue has joined #openstack | 02:54 | |
*** bcwaldon has joined #openstack | 02:59 | |
*** dysinger has quit IRC | 03:05 | |
*** DigitalFlux has quit IRC | 03:07 | |
*** DigitalFlux has joined #openstack | 03:07 | |
*** DigitalFlux has joined #openstack | 03:07 | |
*** DigitalFlux has quit IRC | 03:12 | |
*** DigitalFlux has joined #openstack | 03:13 | |
*** DigitalFlux has joined #openstack | 03:13 | |
*** joearnold has joined #openstack | 03:13 | |
*** santhosh has joined #openstack | 03:13 | |
*** DigitalFlux has quit IRC | 03:16 | |
*** rchavik has joined #openstack | 03:17 | |
*** rchavik has joined #openstack | 03:17 | |
*** gaveen has quit IRC | 03:17 | |
*** larry__ has quit IRC | 03:29 | |
*** zns has joined #openstack | 03:31 | |
alekibango | hi zigo :) | 03:31 |
*** gaveen has joined #openstack | 03:37 | |
*** msivanes has quit IRC | 03:40 | |
*** joearnold has quit IRC | 03:40 | |
uvirtbot | New bug: #780276 in nova "run_tests.sh fails test_authors_up_to_date when using git repo" [Undecided,New] https://launchpad.net/bugs/780276 | 03:41 |
*** joearnold has joined #openstack | 03:42 | |
*** santhosh has quit IRC | 03:43 | |
*** Ephur has quit IRC | 03:47 | |
HugoKuo__ | morning | 03:48 |
*** vernhart has joined #openstack | 03:48 | |
HugoKuo__ | How's going today | 03:49 |
*** gaveen has quit IRC | 03:50 | |
*** bcwaldon has quit IRC | 03:52 | |
alekibango | HugoKuo__: :) waking up | 03:55 |
*** cid has quit IRC | 03:55 | |
HugoKuo__ | alekibango : It's really hot today in Taiwan.... | 03:57 |
alekibango | sun is coming up, birds singing, looks like it will be hot in CZ too | 03:57 |
*** joearnold has quit IRC | 04:01 | |
*** joearnold has joined #openstack | 04:04 | |
*** joearnold has quit IRC | 04:07 | |
*** bcwaldon has joined #openstack | 04:09 | |
*** joearnold has joined #openstack | 04:11 | |
*** masudo has quit IRC | 04:11 | |
*** joearnold has quit IRC | 04:15 | |
*** bcwaldon has quit IRC | 04:16 | |
uvirtbot | New bug: #780287 in nova "nova/scheduler/host_filter.py fails pep8" [Undecided,Fix committed] https://launchpad.net/bugs/780287 | 04:21 |
*** Pyro_ has joined #openstack | 04:25 | |
*** santhosh has joined #openstack | 04:27 | |
*** omidhdl has joined #openstack | 04:35 | |
*** med_out is now known as med | 04:36 | |
*** med is now known as medberru | 04:36 | |
*** medberru is now known as medberry | 04:36 | |
*** grapex has quit IRC | 04:39 | |
*** omidhdl has quit IRC | 04:42 | |
*** omidhdl has joined #openstack | 04:44 | |
*** f4m8_ is now known as f4m8 | 04:51 | |
*** sophiap has quit IRC | 04:53 | |
*** hagarth has joined #openstack | 05:08 | |
*** zenmatt has quit IRC | 05:08 | |
*** kraay has quit IRC | 05:19 | |
*** Ryan_Lane has joined #openstack | 05:20 | |
*** miclorb has quit IRC | 05:22 | |
*** zns has quit IRC | 05:29 | |
zigo-_- | alekibango: Woke up? | 05:54 |
*** vernhart has quit IRC | 05:56 | |
*** fantasy has quit IRC | 06:03 | |
*** mcclurmc_ has joined #openstack | 06:06 | |
HugoKuo__ | zigo-_-: yes he did | 06:09 |
*** fantasy has joined #openstack | 06:11 | |
*** guigui has joined #openstack | 06:13 | |
*** fantasy has quit IRC | 06:25 | |
alekibango | zigo-_-: now finished breakfast | 06:29 |
*** allsystemsarego has joined #openstack | 06:30 | |
*** allsystemsarego has joined #openstack | 06:30 | |
*** nacx has joined #openstack | 06:32 | |
*** fantasy has joined #openstack | 06:32 | |
*** johnpur has quit IRC | 06:34 | |
*** zul has joined #openstack | 06:34 | |
HugoKuo__ | playing with StackOps ... | 06:35 |
*** dendro-afk is now known as dendrobates | 06:36 | |
HugoKuo__ | monitoring features have been added into the Cactus release? | 06:39
zigo-_- | alekibango: I got Glance and Swift packages ready. | 06:40 |
zigo-_- | Now I need to configure Swift and Glance to work together. | 06:40 |
zigo-_- | Can you help? | 06:40 |
*** zul has quit IRC | 06:49 | |
*** s1cz has quit IRC | 06:50 | |
*** gaveen has joined #openstack | 06:53 | |
*** gaveen has joined #openstack | 06:53 | |
*** omidhdl has quit IRC | 06:55 | |
*** rds__ has quit IRC | 06:55 | |
*** omidhdl has joined #openstack | 06:58 | |
*** keds has joined #openstack | 06:59 | |
*** gaveen has quit IRC | 07:01 | |
*** rds__ has joined #openstack | 07:08 | |
*** lborda has joined #openstack | 07:09 | |
*** Pyro_ has quit IRC | 07:12 | |
*** arun_ has joined #openstack | 07:13 | |
*** s1cz has joined #openstack | 07:13 | |
*** Beens has quit IRC | 07:16 | |
*** zul has joined #openstack | 07:16 | |
*** mgoldmann has joined #openstack | 07:19 | |
*** mgoldmann has joined #openstack | 07:19 | |
*** nerens has joined #openstack | 07:27 | |
*** krish|wired-in has joined #openstack | 07:28 | |
*** fantasy has quit IRC | 07:28 | |
*** zaitcev has quit IRC | 07:32 | |
*** obino has quit IRC | 07:34 | |
*** fantasy has joined #openstack | 07:45 | |
*** kraay has joined #openstack | 07:46 | |
*** fantasy has quit IRC | 07:50 | |
*** HugoKuo__ has quit IRC | 07:53 | |
*** HugoKuo has joined #openstack | 07:53 | |
*** perestrelka has quit IRC | 07:54 | |
*** daveiw has joined #openstack | 07:56 | |
*** fantasy has joined #openstack | 07:57 | |
*** lborda has quit IRC | 07:58 | |
*** zul has quit IRC | 07:58 | |
*** fantasy has quit IRC | 08:01 | |
*** zul has joined #openstack | 08:01 | |
*** jeffjapan has quit IRC | 08:01 | |
*** radek has joined #openstack | 08:03 | |
radek | hi, if I have an instance of a windows image running and I've made changes to the OS, is there a way to save it as a new image? | 08:05
radek | how would you do it ? | 08:05 |
*** toluene has quit IRC | 08:05 | |
*** infinite-scale has joined #openstack | 08:06 | |
infinite-scale | does anyone have an idea for a 2 server setup for nova and a 2 server setup for swift? | 08:07
infinite-scale | I imagine 2 clusters, each with an all-in-one server | 08:07
infinite-scale | in a later stage you could just add nodes to the clusters. | 08:08 |
*** fantasy has joined #openstack | 08:10 | |
*** rcc has joined #openstack | 08:16 | |
*** MarkAtwood has quit IRC | 08:18 | |
*** fantasy has quit IRC | 08:20 | |
radek | anyone ? | 08:22 |
*** tjikkun has joined #openstack | 08:25 | |
*** tjikkun has joined #openstack | 08:25 | |
*** infinite-scale has quit IRC | 08:26 | |
*** DigitalFlux has joined #openstack | 08:27 | |
*** DigitalFlux has joined #openstack | 08:27 | |
*** hggdh has joined #openstack | 08:28 | |
*** rchavik has quit IRC | 08:31 | |
*** zul has quit IRC | 08:32 | |
*** fantasy has joined #openstack | 08:34 | |
*** kashyap has quit IRC | 08:34 | |
*** fantasy has quit IRC | 08:35 | |
*** hggdh has quit IRC | 08:43 | |
*** arun_ has quit IRC | 08:48 | |
*** infinite-scale has joined #openstack | 08:50 | |
*** mcclurmc has joined #openstack | 08:52 | |
*** watcher has joined #openstack | 08:53 | |
*** kraay has quit IRC | 08:58 | |
*** lurkaboo is now known as purkaboo | 09:04 | |
*** purkaboo is now known as purpaboo | 09:04 | |
*** dendrobates is now known as dendro-afk | 09:06 | |
*** guigui has quit IRC | 09:07 | |
*** guigui has joined #openstack | 09:07 | |
*** s1cz has quit IRC | 09:09 | |
*** s1cz has joined #openstack | 09:10 | |
*** infinite-scale has quit IRC | 09:17 | |
*** infinite-scale has joined #openstack | 09:19 | |
*** jokajak` has joined #openstack | 09:19 | |
*** jokajak has quit IRC | 09:19 | |
*** Eyk has joined #openstack | 09:22 | |
*** bkkrw has joined #openstack | 09:28 | |
*** Eyk has quit IRC | 09:31 | |
*** miclorb_ has joined #openstack | 09:37 | |
*** infinite-scale has quit IRC | 09:39 | |
*** zul has joined #openstack | 09:45 | |
*** taihen_ is now known as taihen | 09:47 | |
*** perestrelka has joined #openstack | 09:50 | |
*** anticw has quit IRC | 09:55 | |
*** zul has quit IRC | 09:56 | |
*** anticw has joined #openstack | 09:57 | |
*** Eyk has joined #openstack | 10:00 | |
*** infinite-scale has joined #openstack | 10:03 | |
infinite-scale | scripted installation isn't working properly | 10:03 |
infinite-scale | I tried to follow this installation: http://docs.openstack.org/cactus/openstack-compute/admin/content/scripted-ubuntu-installation.html | 10:04 |
*** rchavik has joined #openstack | 10:06 | |
*** CloudChris has joined #openstack | 10:09 | |
*** adjohn has quit IRC | 10:09 | |
*** CloudChris has left #openstack | 10:09 | |
*** winston-d has quit IRC | 10:15 | |
*** lborda has joined #openstack | 10:17 | |
*** rchavik has quit IRC | 10:19 | |
*** rchavik has joined #openstack | 10:19 | |
*** rchavik has joined #openstack | 10:19 | |
*** guigui has quit IRC | 10:21 | |
*** zul has joined #openstack | 10:21 | |
*** Eyk_ has joined #openstack | 10:24 | |
*** Eyk has quit IRC | 10:26 | |
*** santhosh_ has joined #openstack | 10:30 | |
*** santhosh has quit IRC | 10:30 | |
*** santhosh_ is now known as santhosh | 10:30 | |
*** Eyk_ has quit IRC | 10:31 | |
*** zul has quit IRC | 10:33 | |
*** lborda has quit IRC | 10:37 | |
*** Eyk_ has joined #openstack | 10:46 | |
*** santhosh has quit IRC | 10:52 | |
*** fabiand__ has joined #openstack | 10:54 | |
*** santhosh has joined #openstack | 10:55 | |
*** jfluhmann has quit IRC | 10:56 | |
*** pllopis has joined #openstack | 11:00 | |
pllopis | hello | 11:00 |
*** Eyk_ has quit IRC | 11:02 | |
*** fabiand__ has quit IRC | 11:08 | |
*** Ryan_Lane has quit IRC | 11:10 | |
*** omidhdl has left #openstack | 11:14 | |
*** miclorb_ has quit IRC | 11:20 | |
alekibango | should swift be working on 1 server only ( i mean 1 copy only style on one machine) | 11:21 |
*** miclorb_ has joined #openstack | 11:22 | |
*** ChameleonSys has quit IRC | 11:25 | |
*** ChameleonSys has joined #openstack | 11:25 | |
*** ctennis has quit IRC | 11:28 | |
*** markvoelker has joined #openstack | 11:32 | |
*** Eyk has joined #openstack | 11:33 | |
infinite-scale | alekibango, you mean an all-in-one server solution? | 11:33
alekibango | yes... kind of | 11:33 |
alekibango | without replication | 11:33 |
alekibango | just to test it | 11:33 |
infinite-scale | http://swift.openstack.org/development_saio.html | 11:33 |
infinite-scale | should do it I think | 11:34 |
infinite-scale | if you wanna try to use a multinode setup you can still have no replication. | 11:34 |
alekibango | will it work with only one device and with only 1 object server? | 11:34
alekibango | not like 1/2/3/4 .conf | 11:34 |
infinite-scale | as the number of replicas of an object is related to the number of clusters | 11:34
alekibango | did someone try it with only 1? | 11:35 |
infinite-scale | you mean with one node? | 11:35 |
infinite-scale | or one server at all | 11:35 |
alekibango | yes | 11:35 |
alekibango | one everything | 11:35 |
zigo-_- | As in, one storage device. | 11:36 |
infinite-scale | yeah this is possible | 11:36 |
zigo-_- | http://paste.openstack.org/show/1313/ | 11:36 |
infinite-scale | the link is doing it in a vm I think | 11:36
zigo-_- | I got this error... | 11:36 |
zigo-_- | I'm quite stuck for a long time with it. | 11:36 |
alekibango | zigo-_-: its more readable on one line lol | 11:37 |
infinite-scale | http://nova.openstack.org/devref/development.environment.html | 11:37 |
infinite-scale | perhaps that will help | 11:38 |
infinite-scale | alekibango, | 11:38 |
zigo-_- | infinite-scale: alekibango thinks this error is because of one device only ... | 11:38 |
alekibango | it was just idea | 11:38 |
zigo-_- | I think it has nothing to do, this is pure auth issue. | 11:38 |
alekibango | yes | 11:38 |
alekibango | i agree its auth | 11:38 |
zigo-_- | infinite-scale: What do you think? | 11:38 |
infinite-scale | Not sure about the auth stuff | 11:39 |
infinite-scale | has anything to do with the one device setup | 11:39 |
infinite-scale | shouldn't be an issue | 11:39 |
zigo-_- | I'll ask later when there's more people on the channel ... | 11:39 |
alekibango | but still i wonder if it should work with one server only :) | 11:39
infinite-scale | something to do with sql probably | 11:40 |
infinite-scale | alekibango, it is working | 11:40 |
zigo-_- | Does Swift use MySQL? | 11:40
infinite-scale | but it isn't a production environment | 11:40
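For anyone who wants to try the one-box, one-replica experiment discussed above, here is a rough sketch based on the SAIO document linked earlier; the device name, part power and ports are illustrative, and the result is only suitable for testing, not production.

```sh
# hedged sketch: single-node, single-replica rings (device sdb1 and part power 18 are illustrative)
cd /etc/swift
swift-ring-builder object.builder create 18 1 1           # part_power=18, replicas=1, min_part_hours=1
swift-ring-builder object.builder add z1-127.0.0.1:6000/sdb1 100
swift-ring-builder object.builder rebalance
# repeat for container.builder (port 6001) and account.builder (port 6002)
```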
infinite-scale | I thought for user stuff yes | 11:41 |
infinite-scale | perhaps it was just nova, can't be sure at the moment | 11:41
infinite-scale | anyway I'm off. another uni course^^ | 11:41 |
*** ctennis has joined #openstack | 11:42 | |
zigo-_- | Cheers. | 11:42 |
*** zul has joined #openstack | 11:43 | |
*** vernhart has joined #openstack | 11:43 | |
*** icarus901 has quit IRC | 11:44 | |
*** miclorb_ has quit IRC | 11:45 | |
*** infinite-scale has quit IRC | 11:46 | |
*** Jordandev has joined #openstack | 11:47 | |
ccooke | Hiya | 11:47 |
ccooke | Anyone know the minimum components necessary to make nova-compute work? | 11:47 |
*** Jordandev has quit IRC | 11:48 | |
*** guigui has joined #openstack | 11:56 | |
*** dirkx__ has joined #openstack | 12:01 | |
*** guigui has joined #openstack | 12:03 | |
*** GasbaKid has joined #openstack | 12:03 | |
*** hggdh has joined #openstack | 12:09 | |
*** hggdh has quit IRC | 12:11 | |
*** hggdh has joined #openstack | 12:12 | |
*** infinite-scale has joined #openstack | 12:12 | |
*** zul has quit IRC | 12:15 | |
*** rchavik has quit IRC | 12:16 | |
*** scalability-junk has joined #openstack | 12:16 | |
*** dendro-afk is now known as dendrobates | 12:16 | |
*** shentonfreude has quit IRC | 12:17 | |
*** katkee has joined #openstack | 12:18 | |
*** katkee has quit IRC | 12:20 | |
*** rackerhacker has quit IRC | 12:26 | |
*** kakella has joined #openstack | 12:27 | |
*** kakella has left #openstack | 12:27 | |
*** rackerhacker has joined #openstack | 12:33 | |
alekibango | ccooke: you need strong will and some luck | 12:47 |
alekibango | :) | 12:47 |
*** GasbaKid has quit IRC | 12:47 | |
alekibango | ccooke: try reading this http://fnords.wordpress.com/2010/12/02/bleeding-edge-openstack-nova-on-maverick/ | 12:47 |
alekibango | if you are on maverick | 12:48 |
*** zul has joined #openstack | 12:51 | |
*** infinite-scale has quit IRC | 12:53 | |
*** dprince has joined #openstack | 12:55 | |
*** KnuckleSangwich has quit IRC | 12:55 | |
*** jokajak` has quit IRC | 12:57 | |
*** jokajak` has joined #openstack | 12:57 | |
*** jokajak` is now known as jokajak | 12:57 | |
*** shentonfreude has joined #openstack | 13:01 | |
*** zul has quit IRC | 13:06 | |
*** aliguori has joined #openstack | 13:06 | |
*** dendrobates is now known as dendro-afk | 13:06 | |
*** zul has joined #openstack | 13:06 | |
*** hggdh has quit IRC | 13:07 | |
*** nacx has quit IRC | 13:09 | |
*** zul has quit IRC | 13:12 | |
*** hadrian has joined #openstack | 13:12 | |
ccooke | alekibango: I have a working openstack on natty at the moment | 13:15 |
alekibango | aha | 13:15 |
ccooke | I'm currently working on *just* a nova-compute node | 13:15 |
alekibango | if you are adding only | 13:15 |
ccooke | just double-checking what it needs to talk to | 13:15 |
alekibango | compute should be ok iirc -- provided you have storage space or storage nodes along | 13:17 |
ccooke | I'm building a natty chroot to run on XenServer nodes | 13:17 |
*** zenmatt has joined #openstack | 13:17 | |
ccooke | but due to Fun, my Xen server is in a different security zone to the rest of nova | 13:17 |
ccooke | so I'm port forwarding. | 13:17 |
ccooke | Should be fine, though :-) | 13:18 |
alekibango | ah this might kill you lol | 13:18 |
*** bcwaldon has joined #openstack | 13:18 | |
alekibango | natty chroot? why? | 13:18 |
alekibango | nova is best on bare metal | 13:18 |
ccooke | Not on a XenServer build | 13:19 |
alekibango | i am not sure i understand what are you trying to do | 13:19 |
ccooke | which is based on an out-of-date redhat with limited space | 13:19 |
*** rchavik has joined #openstack | 13:19 | |
*** rchavik has joined #openstack | 13:19 | |
ccooke | Basically, for openstack XenServer (that's Citrix Xen, not open-source Xen) you must have a nova-compute per hypervisor and it *must* either run in the dom0 or a domU on the hypervisor you want it to control | 13:20 |
ccooke | so, there are two approaches: Use a VM for the nova-compute binary or install native into dom0 | 13:20 |
alekibango | ah, i dont know a bit about citrix xen + nova | 13:21 |
ccooke | I don't like the VM approach due to it being messy and awkward | 13:21 |
ccooke | so I'm building a sensible approach to installing it on the XenServer dom0 | 13:21 |
alekibango | ic | 13:21 |
ccooke | there's limited space and installing nova into the base image would need *huge* changes | 13:22 |
ccooke | very unclean | 13:22 |
ccooke | Or... I can build a minimised natty chroot. Make it into a squashfs image, and wrap it around with scripts | 13:22 |
alekibango | ic now... | 13:22 |
ccooke | You then get a versioned, packageable "nova-compute" blob you can install onto a XenServer | 13:22
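A hedged sketch of the chroot-blob approach ccooke describes, assuming debootstrap and squashfs-tools are available; the paths and the final wrapper step are illustrative, not his actual build scripts.

```sh
# hedged sketch: build a minimal natty root, squash it, and run nova-compute from it on dom0
debootstrap --variant=minbase natty /srv/nova-root http://archive.ubuntu.com/ubuntu
# ... install nova-compute and its dependencies inside the chroot ...
mksquashfs /srv/nova-root /srv/nova-compute-natty.squashfs
# on the XenServer dom0: loop-mount the read-only image and start the service via a wrapper
# (a real setup would also bind-mount /proc, /sys and writable /etc/nova and /var into the chroot)
mount -o loop,ro /srv/nova-compute-natty.squashfs /opt/nova-root
chroot /opt/nova-root /usr/bin/nova-compute --flagfile=/etc/nova/nova.conf
```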
scalability-junk | who has a production environment running? with several service servers etc. | 13:23 |
ccooke | it's still around 260M all told, but that is reasonable | 13:23
scalability-junk | do you have a second mysql server? | 13:23 |
ccooke | reasonable! | 13:23 |
alekibango | ccooke: its possible that nova might miss some depends :) be on alert and report it when it happens | 13:23 |
ccooke | alekibango: yeah, caught out before on that :-) | 13:23 |
alekibango | dont suspect a friend! report him! | 13:23 |
alekibango | er.. bug | 13:23 |
*** patcoll has joined #openstack | 13:23 | |
*** santhosh_ has joined #openstack | 13:28 | |
*** dendro-afk is now known as dendrobates | 13:28 | |
*** infinite-scale has joined #openstack | 13:28 | |
*** Zangetsue has quit IRC | 13:28 | |
*** Zangetsue has joined #openstack | 13:29 | |
*** santhosh has quit IRC | 13:29 | |
*** santhosh_ is now known as santhosh | 13:29 | |
*** msivanes has joined #openstack | 13:29 | |
scalability-junk | noone here with a production environment? | 13:30 |
scalability-junk | or is there any docs about the environment at rackspace? | 13:30 |
*** amccabe has joined #openstack | 13:31 | |
*** fabiand__ has joined #openstack | 13:35 | |
*** lborda has joined #openstack | 13:36 | |
ccooke | scalability-junk: what are you trying to do? | 13:37 |
scalability-junk | implement my own private cloud | 13:38 |
scalability-junk | but I read in the docs that the minimum would be a 4 server setup for swift + 4 for nova | 13:38 |
scalability-junk | and 8 server is a bit too much for a student right now :D | 13:38 |
j05h | scalability-junk: you can do it with one. | 13:39 |
scalability-junk | production environment with failover? | 13:39 |
j05h | now you have a requirement ;) | 13:39 |
scalability-junk | The plan was 2 clusters, each with one all-in-one server | 13:39
scalability-junk | so that each service is there twice | 13:39 |
scalability-junk | but I'm not sure on how this could communicate | 13:40 |
scalability-junk | replicated database and so on | 13:40 |
j05h | it seems like it depends on what kind of availability you're looking for and what your use cases are. you could also do backups and restore. if you're on a budget, i'm not sure you'd want to leave half of your hardware unused. | 13:42 |
scalability-junk | so what would I need to back up to get back up easily | 13:43
scalability-junk | ? | 13:43 |
scalability-junk | and what setup would you say would be alright. My budget is for a maximum of 4 server 2 for swift and 2 for nova | 13:43 |
*** Zangetsue has quit IRC | 13:44 | |
*** f4m8 is now known as f4m8_ | 13:45 | |
*** krish|wired-in has quit IRC | 13:45 | |
*** santhosh has quit IRC | 13:47 | |
scalability-junk | j05h: I thought with using 2 clusters I have a sort of HA environment. because of the budget I would use the single server per cluster as an all in one (+nova-compute) but I wasn't sure how this could be done well. | 13:48 |
scalability-junk | So I thought seeing a bigger production environment would be great to study the infrastructure and use it for this micro start^^ | 13:49 |
*** lborda has quit IRC | 13:49 | |
*** lborda has joined #openstack | 13:50 | |
zigo-_- | :) | 13:50 |
* scalability-junk zigo :( | 13:52 | |
alekibango | scalability-junk: there are not enough companies using NOVA in production now | 13:53
alekibango | and 4 servers is not enough to get started | 13:53 |
alekibango | 5 = small test, proving that you can do it | 13:53 |
*** robbiew has joined #openstack | 13:53 | |
alekibango | 20 = beta | 13:53 |
alekibango | 100+ => production | 13:53 |
alekibango | if you have only 4 total... maybe you should rather use virt-manager instead | 13:54 |
*** robbiew has left #openstack | 13:54 | |
scalability-junk | alekibango: it isn't like I haven't tried around, but I wanted to try openstack now. Already used eucalyptus, nimbula, opennebula etc. | 13:55 |
alekibango | scalability-junk: nova doesnt care how you replicate your database (its your job) | 13:55 |
alekibango | nova can take care of replicating your virtual drives, but thats not enabled by default and its not trivial | 13:55 |
*** rchavik has quit IRC | 13:55 | |
scalability-junk | Yeah I just wanted to ask if someone has a production environment and could give me a few hints on how they make it HA and stuff | 13:56
alekibango | scalability-junk: openstack is prolly hardest to get installed right (at least nova) | 13:56 |
alekibango | but it has greatest adaptability of all | 13:56 |
*** lborda has quit IRC | 13:56 | |
*** fabiand__ has quit IRC | 13:56 | |
alekibango | thats maybe the problem even, as there are myriads of ways to install it | 13:56
*** pllopis has left #openstack | 13:56 | |
scalability-junk | that's why I wanna try it, I love the way it's doing things and I love python :D | 13:57 |
scalability-junk | anyway so you think it's not at all possible to use 4 servers for a production environment with swift and nova? | 13:57
alekibango | it might be | 13:57 |
alekibango | but it will be 'production' | 13:58 |
alekibango | with little added value | 13:58 |
scalability-junk | the value is scalability | 13:58 |
alekibango | if you will have 40 servers later, ok | 13:58 |
scalability-junk | and experience for the most part :P | 13:58 |
alekibango | if not, nova might be just big cannon for you | 13:59 |
alekibango | too big i mean | 13:59 |
alekibango | scalability-junk: but nova is fun, really | 13:59 |
scalability-junk | that's why I tried eucalyptus and so on, but that's not what I wanna use or try or whatever | 13:59 |
alekibango | scalability-junk: i have similar setup, 4 servers | 14:00 |
zigo-_- | I believe that lot's of hard issues of swift can be solved with a bit of debconf and default config files. | 14:00 |
alekibango | and it can work | 14:00 |
alekibango | but its not worh it for production | 14:00 |
zigo-_- | I don't understand why currently there's no default one ... | 14:00 |
scalability-junk | 4 server setup only nova? | 14:00 |
Eyk | to install swift 1.4dev for production, is "python setup.py develop" right or do i have to change the "develop" into something else? | 14:00
alekibango | swift for production is great | 14:00 |
alekibango | nova is the hard one | 14:00 |
alekibango | :) | 14:01 |
alekibango | Eyk: i use swift cactus 1.3. | 14:01 |
creiht | Eyk: a good starting point is here: http://swift.openstack.org/howto_installmultinode.html | 14:01 |
alekibango | from packages | 14:01 |
alekibango | scalability-junk: you will start liking nova when you think about 100 or more nodes | 14:02 |
alekibango | thats when its excelling | 14:02 |
scalability-junk | alekibango: If I only go with easy things aimed at my size, how can I grow ;) | 14:02
alekibango | especially with some configuration management system | 14:02 |
scalability-junk | like puppet? | 14:02 |
alekibango | puppet or chef | 14:02 |
alekibango | i started playing with chef today | 14:02 |
*** galthaus has joined #openstack | 14:03 | |
alekibango | i use fai for install :) | 14:03 |
Eyk | i just dont know with which command I should install the downloaded trunk version "python setup.py develop" <-- is this right? | 14:03 |
alekibango | but i still have issues.. its not working as i want it to | 14:03 |
creiht | Eyk: python setup.py install | 14:03 |
creiht | will install the python packages with python | 14:03 |
scalability-junk | do you have failover methods? | 14:03 |
scalability-junk | 2 services like api and so on? | 14:03 |
scalability-junk | or are you just going with one server for the services and one database and 3 clusters with single nodes in them? | 14:04 |
alekibango | scalability-junk: rather do 4 nodes for swift | 14:04 |
alekibango | 1 node for nova controller and 3 compute nodes | 14:04 |
alekibango | combine into one system if you wish | 14:04 |
alekibango | to have headaches | 14:04 |
alekibango | :) | 14:04 |
Eyk | creiht, tnx | 14:04 |
alekibango | so you can test whats happening when 2 nodes go down | 14:05 |
scalability-junk | what would you do if the node for nova controller dies? | 14:05 |
creiht | scalability-junk: short story for swift, is you could get it installed on 2 nodes, but it isn't going to work optimally. You need a bare minimum of 3, and we recommend a bare minimum of 5 to handle failure scenarios | 14:05 |
alekibango | scalability-junk: you use big megaphone to call admin | 14:05 |
alekibango | :) | 14:05 |
alekibango | creiht: thanks you said it well | 14:06 |
alekibango | scalability-junk: and what if the nova-network node dies? | 14:06
scalability-junk | creiht: yeah read that in the docs. but starting small would be cool you know | 14:06 |
alekibango | scalability-junk: you can run more controller nodes | 14:06 |
alekibango | they dont care where they are | 14:07 |
creiht | scalability-junk: yeah I understand, but when we first wrote it, we weren't thinking that small of scale :) | 14:07 |
alekibango | scalability-junk: at least not for production | 14:07 |
alekibango | creiht: still,i like it very much | 14:07 |
creiht | We wrote it with the intention of starting off with petabytes of storage | 14:07 |
scalability-junk | creiht: I know I know, but why not try it. are there any docs on how to set up a production environment with a lot of servers? | 14:08
*** keny has joined #openstack | 14:08 | |
creiht | scalability-junk: the doc I linked to above gets you started | 14:08 |
Eyk | when I remove or add a disk to a swift ring, can I just add the disk and rebalance the ring on some node, or do I need to do something else like replicate or restart something? | 14:08
alekibango | scalability-junk: just those docs worked for me on 4 | 14:08 |
alekibango | one afternoon and swift was ok... unlike nova lol | 14:08 |
creiht | http://docs.openstack.org/cactus/openstack-object-storage/admin/content/ | 14:09 |
alekibango | nova manuals are often outdated and wrong :) | 14:09 |
creiht | is another good start | 14:09 |
creiht | scalability-junk: most of it has to do with handling failure and how much risk you are willing to take | 14:09 |
alekibango | scalability-junk: look on sheepdog for reliable storage... | 14:10 |
alekibango | you need at least 3/4 nodes to start with | 14:10 |
scalability-junk | did I overlook something? I just found talk about the storage node failures | 14:10
*** robbiew1 has joined #openstack | 14:10 | |
*** tblamer has joined #openstack | 14:12 | |
creiht | scalability-junk: without going in to a lot of detail, in a 2 node swift cluster, if one node goes down, a large portion of operations (especially PUTs) will not succeed until that node is brought back up | 14:13 |
scalability-junk | ah ok | 14:14 |
scalability-junk | so 4 server minimum for swift | 14:14 |
*** hggdh has joined #openstack | 14:14 | |
alekibango | he said 5 :) | 14:14 |
scalability-junk | and 4 server minimum for nova that gets expensive ;) | 14:14 |
creiht | 3 is bare minimum | 14:14 |
alekibango | recommended 5 :) | 14:15 |
scalability-junk | ok | 14:15 |
creiht | yeah | 14:15 |
alekibango | servers are cheap now | 14:15 |
scalability-junk | 29€ per month^^ per server | 14:15 |
OutBackDingo | alekibango: not if they're 120 core boxes :P | 14:15
alekibango | hehe | 14:15 |
alekibango | OutBackDingo: where can i get those? | 14:15 |
OutBackDingo | alekibango: from me :P | 14:15 |
alekibango | url? | 14:15 |
alekibango | i didnt see those here, maybe i was not looking much | 14:16 |
OutBackDingo | you want to buy? | 14:16 |
OutBackDingo | or lease? | 14:16 |
alekibango | i want to see first | 14:16 |
OutBackDingo | we sell SGI | 14:16 |
OutBackDingo | :) | 14:16 |
alekibango | :) | 14:16 |
OutBackDingo | and Intel | 14:16 |
*** robbiew1 has quit IRC | 14:16 | |
OutBackDingo | and networking gear and high end storage | 14:17 |
*** cdbs is now known as cdbs-isnt-good | 14:17 | |
alekibango | OutBackDingo: i would like to see new intel boards with many cpus | 14:17 |
*** cdbs-isnt-good is now known as cdbs | 14:17 | |
alekibango | even if i used sgi for years :) | 14:17 |
OutBackDingo | alekibango: ive got an Intel box here with 48 and 96 cores | 14:18 |
nhm | alekibango: me too. I'd like to see boards with hypercube or even 3D torus QPI setups. | 14:18 |
alekibango | and btw for swift - i think about making my own boxes - with cheap arm cpu :) | 14:18 |
nhm | OutBackDingo: 96 cores is interesting. All on the same board? | 14:18 |
alekibango | such machine can be smaller than disk :) | 14:18 |
OutBackDingo | nhm: yupp | 14:18 |
nhm | OutBackDingo: How many QPI links per chip? | 14:19 |
alekibango | so i would just stack drives up, stick computers on them where appropriate ... to have storage (LOL) | 14:19 |
Eyk | an arm system is enough for swift? | 14:19
alekibango | why not | 14:19 |
alekibango | when its enough to stream video | 14:19 |
alekibango | ... i need to test it first, right | 14:20 |
alekibango | but i think it will work well | 14:20 |
Eyk | swift is very resource friendly ? | 14:20 |
alekibango | imho its not cpu hog, is it? | 14:20 |
alekibango | hmm but you are right, i will tell those arm-people to test it for me first :) | 14:21 |
OutBackDingo | nhm: looks like 2 | 14:21 |
alekibango | having computer for 30$ could be fun | 14:22 |
alekibango | especially if it has lower consumption | 14:22 |
OutBackDingo | nhm: actually spec sheet says 4 | 14:22 |
nhm | OutBackDingo: huh, that's surprising. Given how many sockets there must be on that board I can't imagine you get very fast socket->socket communication. | 14:22 |
*** grapex has joined #openstack | 14:23 | |
*** ianp100 has joined #openstack | 14:23 | |
*** johnpur has joined #openstack | 14:23 | |
*** ChanServ sets mode: +v johnpur | 14:23 | |
*** blamar_ has joined #openstack | 14:23 | |
ianp100 | what do the swift scripts swift-stats-populate and swift-stats-report do? | 14:23 |
*** blamar_ is now known as blamar | 14:23 | |
OutBackDingo | alekibango: question why is 5 recommended? | 14:24 |
*** larry__ has joined #openstack | 14:24 | |
*** shentonfreude has quit IRC | 14:25 | |
alekibango | [2011-05-10 16:13] <creiht> scalability-junk: without going in to a lot of detail, in a 2 node swift cluster, if one node goes down, a large portion of operations (especially PUTs) will not succeed until that node is brought back up | 14:25 |
alekibango | OutBackDingo: ^^ | 14:25 |
*** CloudChris has joined #openstack | 14:25 | |
*** zenmatt has quit IRC | 14:26 | |
creiht | ianp100: http://swift.openstack.org/overview_stats.html | 14:26 |
*** CloudChris has left #openstack | 14:26 | |
annegentle | kpepple: ping | 14:26 |
annegentle | kpepple: ah, you're in Seoul now, will email :) | 14:28 |
notmyname | ianp100: creiht: swift-stats-populate and swift-stats-report are the dispersion reports (and will be renamed for clarity in the next release, I think) | 14:29 |
alekibango | hmm those arm boards i was thinking about have no sata... only usb2 ... hmm need to wait for better :) | 14:29
ianp100 | creiht: ive looked at that link but i can't see where swift-stats-populate and swift-stats-report are mentioned | 14:29
*** grapex has quit IRC | 14:29 | |
notmyname | ianp100: http://swift.openstack.org/admin_guide.html#cluster-health | 14:30 |
*** spectorclan_ has joined #openstack | 14:30 | |
creiht | notmyname: heh... sorry | 14:30 |
creiht | I always do that | 14:30 |
notmyname | :-) and that's why it needs to be renamed | 14:30 |
creiht | indeed | 14:30 |
notmyname | I think gholt has a merge for that now | 14:31 |
creiht | ahh cool | 14:31 |
*** jkoelker has joined #openstack | 14:31 | |
ianp100 | notmyname: perfect, thanks! | 14:32 |
*** shentonfreude has joined #openstack | 14:33 | |
scalability-junk | damn I hate being a student with no money to sponsor my own small production cloud :P | 14:33 |
*** zenmatt has joined #openstack | 14:34 | |
*** skiold has joined #openstack | 14:35 | |
scalability-junk | anyway I'm trying this stuff 24/7 to run nova on a 2 cluster setup with failover :D | 14:40
scalability-junk | will be fun :D | 14:40 |
keny | I am trying to get openstack running on debian. I did an installation from source. Things seem to be working, except nova-compute won't start ... | 14:41 |
keny | I captured the error log: http://pastebin.com/4pBAPVVG | 14:41 |
keny | ClassNotFound: Class get_connection could not be found <- I'm no python expert, but get_connection looks like a method to me | 14:42 |
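A hedged guess about that traceback: on a source install this message often just means an import inside nova.virt.connection failed (a missing dependency such as the python libvirt bindings), and nova surfaces it as the wrapper function get_connection being "not found". Two quick checks that can confirm or rule that out:

```sh
# hedged checks, assuming a libvirt-based compute host
python -c "import libvirt"                      # are the python libvirt bindings installed?
python -c "from nova.virt import connection"    # does nova's virt layer import cleanly?
```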
*** dendrobates is now known as dendro-afk | 14:42 | |
*** grapex has joined #openstack | 14:45 | |
*** niksnut has quit IRC | 14:46 | |
*** rnirmal has joined #openstack | 14:47 | |
Eyk | when I change a swift builder file, do I have to copy the .ring.gz file immediately to all nodes and reload their service? (I dont understand the ring management from the docs) | 14:48
*** hggdh has quit IRC | 14:53 | |
*** skiold has quit IRC | 14:57 | |
*** dragondm has joined #openstack | 14:59 | |
*** shentonfreude has quit IRC | 14:59 | |
*** bkkrw has quit IRC | 15:01 | |
ccooke | Any of the XenServer people around atm? | 15:01 |
ccooke | I now have a running nova-compute on my XenServer hypervisor | 15:02 |
*** mgoldmann has quit IRC | 15:02 | |
ccooke | but it's complaining that it can't find a network for bridge xenbr0 | 15:02 |
*** photron has joined #openstack | 15:02 | |
ccooke | my config appears to be correct as far as the documentation goes, but... | 15:02 |
*** shentonfreude has joined #openstack | 15:03 | |
*** hagarth has quit IRC | 15:06 | |
*** skiold has joined #openstack | 15:08 | |
*** ianp100 has quit IRC | 15:09 | |
*** niksnut has joined #openstack | 15:13 | |
*** zul has joined #openstack | 15:13 | |
*** obino has joined #openstack | 15:16 | |
galthaus | Eyk: yes, that is my understanding. | 15:16 |
gholt | Eyk: galthaus: Copying the ring files out is enough. Every service that uses the ring checks the file's mtime occasionally and will reload its copy of the ring automatically. | 15:17 |
gholt | When copying the ring out, it is best to copy it to a temporary location and then move it into place. You don't want to end up with half-rings or anything. :) | 15:18 |
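A rough sketch of what gholt describes: push the rebalanced ring to each storage node under a temporary name, then rename it into place so no service ever reads a half-written file. The host names here are illustrative.

```sh
# hedged sketch: distribute a new ring without exposing a half-written file
for node in storage1 storage2 storage3; do
    scp /etc/swift/object.ring.gz "$node:/tmp/object.ring.gz.new"
    ssh "$node" 'mv /tmp/object.ring.gz.new /etc/swift/object.ring.gz'   # rename within one fs is atomic
done
# the swift services notice the changed mtime and reload the ring on their own; no restarts needed
```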
galthaus | Ah - very nice | 15:18 |
Eyk | thank you, for the infos | 15:20 |
*** larry__ has quit IRC | 15:20 | |
*** guigui has quit IRC | 15:22 | |
dprince | jaypipes: So are you going to do the PPA package updates for glance API versioning? Soren? | 15:24 |
jaypipes | dprince: I will work with soren once I get your latest fixes done. | 15:24 |
ccooke | dprince: Got a sec for a Xen-related query? | 15:24 |
jaypipes | dprince: and vishy, soren, perhaps it's worth a quick call today to coordinate... the Glance API is changing (adding a version identifier) and existing Glance clients *will* break. | 15:25
jaypipes | soren: ^^ | 15:25 |
dprince | jaypipes: cool. So I'm dying to try that out. We can smoke test that fairly easily with our VPC setup, so give me or Waldon a shout when that installer branch is in. | 15:25
jaypipes | dprince: I'll have those fixes up there within half an hour. really appreciate your review. | 15:26 |
dprince | jaypipes: I need to coordinate w/ you on a couple Chef changes as well but I can bang those out fairly quickly. | 15:26
dprince | jaypipes: Sounds good. NP | 15:26 |
jaypipes | dprince: ack on chef. | 15:27 |
dprince | ccooke: what is up? I'm not the Xen expert but shoot. | 15:27 |
jaypipes | dprince: mainly around the conf changes, I assume? | 15:27 |
creiht | anyone know off the top of their head where the api extensions blueprint is? | 15:27 |
dprince | jaypipes: yes sir. | 15:27 |
ccooke | dprince: you're the only name I recognise active who has previously talked to me about it :-) | 15:27 |
*** enigma1 has joined #openstack | 15:28 | |
dprince | ccooke: sounds like it is time to change my handle. Anyway. What you got? | 15:28 |
dprince | +creiht: That was part of the OS API v1.1 blueprint right? | 15:28 |
westmaas | creiht: there isn't a seperate one, its rolled into the 1.1 spec | 15:28 |
ccooke | dprince: I have a nova-compute that appears to be basically working on a XenServer | 15:28 |
creiht | dprince, westmaas: ahh thanks! | 15:29 |
ccooke | dprince: but it's complaining when I try to create an instance: "Error: Network could not be found for bridge xenbr0" | 15:29 |
*** zul has quit IRC | 15:29 | |
ccooke | dprince: as far as I can tell, I've followed the docs. Does this mean I also need a nova-network on the hypervisor? | 15:29 |
dprince | ccooke: What bridges do you have? | 15:29 |
dprince | ccooke: brctl show | 15:30 |
ccooke | this is a standard XenServer box - xenbr0 has been preconfigured correctly | 15:30 |
ccooke | and yes, it (and xenbr1-3) are in the output of brctl show | 15:30 |
dprince | ccooke: sure. That is correct. | 15:30 |
*** fabiand__ has joined #openstack | 15:31 | |
ccooke | What is correct? Context unclear :-) | 15:31 |
dprince | ccooke: Sorry. Just meant that it was correct that XenServer creates that by default. | 15:32 |
ccooke | Ah, yes. | 15:32 |
dprince | ccooke: checking my setup.... | 15:32 |
ccooke | (I have a network configured on the box the rest of nova is running on, but I don't see any way to configure that as including any particular host or bridge. My nova config is set to the flat network manager as per the documentation, and all the xen-specific lines in the example config are present.) | 15:33
dprince | ccooke: ifconfig xenbr0 | 15:34 |
dprince | ccooke: does xenbr0 have an IP? | 15:34 |
ccooke | dprince: it's up and has an IP | 15:35 |
soren | jaypipes: I'm at the ubuntu developer summit this week, so I haven't much time for other stuff. The more specific you can be about the changes you need me to make, the better. | 15:35 |
soren | jaypipes: ...since I haven't really followed the discussion much, to be honest. | 15:35 |
*** h0cin has joined #openstack | 15:36 | |
dprince | ccooke: So to be clear. Xenbr0 should be on the XenServer (host machine). Not on the guest utility VM running nova-compute right? | 15:37 |
ccooke | dprince: I am running nova-compute directly on the hypervisor | 15:37 |
dprince | ccooke: Oh. | 15:37 |
dprince | ccooke: Did Ant and Pvo recommend you set things up like that the other day? | 15:38 |
jaypipes | soren: no worries. so, the Glance API has been changed to add versioning into our URI structure. In addition, the previous single glance.conf has been broken into glance-api.conf and glance-registry.conf, which means the packaging/install of config files needs to be changed in coordination with the glance API version branch hitting Glance's trunk. | 15:38 |
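For readers following this packaging thread, a minimal sketch of the post-split layout jaypipes describes; the option names and values are illustrative of that era's defaults, not an authoritative config.

```sh
# hedged sketch: glance-api.conf and glance-registry.conf replace the single glance.conf
cat > /etc/glance/glance-api.conf <<'EOF'
[DEFAULT]
bind_host = 0.0.0.0
bind_port = 9292
registry_host = 127.0.0.1
registry_port = 9191
EOF
# glance-registry.conf gets its own [DEFAULT] section, listening on 9191
```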
dprince | ccooke: I'm not running it that way. I'm actually not even sure the codebase supports it. | 15:38 |
soren | jaypipes: Ok. Is there a useful migration path from the single-file layout to this? | 15:38 |
ccooke | dprince: it was given as an equal option, and we (that is, my employer) decided that we'd much prefer to see a manageable nova-compute for the HV | 15:38
jaypipes | bcwaldon, dprince: what are your thoughts on /v1.0/images vs. /v1/images? I'm leaning towards /v1.0/images because that is how the Swift API is structured. | 15:39 |
ccooke | dprince: I'm currently working on a squashfs blob with a minimal natty root in it, with some wrapper scripts. | 15:39 |
jaypipes | soren: hmm... good question. | 15:39 |
ccooke | dprince: currently comes to a bit over 220M, which is easily managed directly on dom0 | 15:39 |
jaypipes | soren: not sure what is possible and what isn't... perhaps I should write a simple migration script for the glance.conf? | 15:40 |
dprince | ccooke: Hmmm. So a chroot of sorts? | 15:40 |
soren | Oh, by the way, Oneiric (Ubuntu next version) will very likely have Xen dom0 support. | 15:40 |
ccooke | dprince: basically | 15:40 |
soren | I guess some of you will find that interesting :) | 15:40 |
ccooke | dprince: read-only, though, and held in a single file so it's easy to version and replace | 15:40 |
dprince | ccooke: Also natty? I haven't tested it with XenServer yet. | 15:40 |
ccooke | dprince: *vastly* more managable than running VMs | 15:41 |
soren | jaypipes: Not having looked at the reasons for the split it's hard for me to say something useful here. I'm just concerned about upgrade scenarios. | 15:41 |
ccooke | dprince: everything else I'm using is natty-based, so it seemed the best plan to keep to that. | 15:41 |
ccooke | dprince: openstack elsewhere seems to run fine on it. The XenServer libs are all python anyway, so there really shouldn't be any issues | 15:42 |
*** larry__ has joined #openstack | 15:42 | |
jaypipes | soren: so, the reason for the split was to align ourselves better with the way Swift structures its config files... | 15:42 |
ccooke | dprince: in your VM, do you run an instance of nova-network? | 15:43 |
soren | jaypipes: Ok. | 15:43 |
soren | jaypipes: So apart from the split, nothing has changed? | 15:43 |
jaypipes | soren: it also makes it easier to a) manage the 2 different Glance servers separately, and b) eventually the aim is to be able to package the registry and API servers separately. | 15:43 |
dprince | ccooke: Yes. I'm running maverick. It works great although you need to set --xenapi_remap_vbd_dev=true. | 15:43 |
soren | jaypipes: Should be a fairly simple migration. | 15:43 |
dprince | ccooke: I think that is probably fixed with natty but watch out for it too perhaps. | 15:44 |
ccooke | dprince: wait. Your vm *does* include nova-network running? | 15:44 |
jaypipes | soren: for the packaging, no, nothing else has changed. we need to work with vishy to coordinate changes that were made to the API and client, but that shouldn't affect packaging. | 15:44 |
ccooke | dprince: That sounds like my missing piece. | 15:44 |
dprince | ccooke: No. | 15:44 |
dprince | ccooke: Oh. Wait. Yeah. I see. | 15:44 |
jaypipes | baib | 15:45 |
dprince | ccooke: Your nova-network also needs to use the xenbr0 bridge too I think. | 15:45 |
jaypipes | biab, even... | 15:45 |
dprince | ccooke: Mine does. But it is on a different box. | 15:45 |
ccooke | dprince: right. Which means you need a nova-network on each hypervisor, too | 15:45 |
dprince | ccooke: Yeah. That should fix your issue I think. | 15:45 |
ccooke | dprince: as well as a nova-compute | 15:45 |
dprince | ccooke: In my setup nova-network is on a separate machine. I do however have a xenbr0 on that machine even though it isn't running XenServer. | 15:47 |
dprince | ccooke: There are a couple ways to go about getting this interface. In any case I think you are on the right track. | 15:47 |
ccooke | dprince: .... that is a very, very broken thing to have to do :-) | 15:47 |
*** zenmatt has quit IRC | 15:47 | |
dprince | ccooke: maybe a bit confusing I guess. A convention of sorts. nova-network has to have some way to obtain that info. | 15:48 |
dprince | ccooke: Are you using FlatDHCP? | 15:49 |
dprince | ccooke: or Vlan? | 15:49 |
ccooke | dprince: no settings say either way | 15:51 |
dprince | ccooke: What I was going to suggest is that you can have nova automatically setup the xenbr0 bridge for you on an unused interface if you use the --flat_interface flag. | 15:51 |
dprince | ccooke: So if you use --network_manager=nova.network.manager.FlatDHCPManager | 15:52 |
dprince | ccooke: --flat_network_bridge=xenbr0 | 15:52 |
dprince | ccooke: --flat_interface=eth1 | 15:52 |
*** Ryan_Lane has joined #openstack | 15:52 | |
*** fabiand__ has quit IRC | 15:52 | |
dprince | ccooke: Nova would then automatically create and bridge into that interface for you. | 15:52 |
dprince | ccooke: Just a suggestion. I use that. Something Vish added which is quite handy. | 15:53 |
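Pulling dprince's suggestions together: a sketch of the relevant flag-file lines, where eth1 stands in for whatever unused interface the host has, and the remap flag is the maverick workaround he mentioned a little earlier. The flag-file path assumes a packaged install.

```sh
# hedged sketch: flags for FlatDHCP with a nova-managed xenbr0 bridge
cat >> /etc/nova/nova.conf <<'EOF'
--network_manager=nova.network.manager.FlatDHCPManager
--flat_network_bridge=xenbr0
--flat_interface=eth1
--xenapi_remap_vbd_dev=true
EOF
```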
ccooke | dprince: huh. Okay | 15:53 |
*** zenmatt has joined #openstack | 15:53 | |
ccooke | I'll give that a try | 15:53 |
dprince | ccooke: Anyway. Gotta go. I'll be online later today too. | 15:53 |
ccooke | thanks for the help | 15:54 |
*** krish|wired-in has joined #openstack | 15:55 | |
*** medberry is now known as med_out | 15:56 | |
*** zenmatt has quit IRC | 16:01 | |
*** dirkx__ has quit IRC | 16:02 | |
*** zenmatt has joined #openstack | 16:03 | |
*** Ryan_Lane has quit IRC | 16:09 | |
*** joearnold has joined #openstack | 16:09 | |
*** zenmatt has quit IRC | 16:10 | |
*** grapex has quit IRC | 16:14 | |
*** grapex has joined #openstack | 16:15 | |
*** lborda has joined #openstack | 16:17 | |
*** MarkAtwood has joined #openstack | 16:17 | |
*** nerens has quit IRC | 16:19 | |
*** maplebed has joined #openstack | 16:21 | |
*** mattray has joined #openstack | 16:22 | |
*** ccustine has joined #openstack | 16:23 | |
*** lborda has quit IRC | 16:23 | |
*** zigo-_- has quit IRC | 16:28 | |
*** nerens has joined #openstack | 16:29 | |
*** jakedahn has quit IRC | 16:29 | |
*** jakedahn has joined #openstack | 16:30 | |
*** daveiw has quit IRC | 16:35 | |
*** mgoldmann has joined #openstack | 16:39 | |
*** troytoman-away is now known as troytoman | 16:41 | |
*** dprince has quit IRC | 16:46 | |
*** jdurgin has joined #openstack | 16:47 | |
*** crescendo has quit IRC | 16:50 | |
*** zenmatt has joined #openstack | 16:51 | |
*** nerens has quit IRC | 16:55 | |
*** dprince has joined #openstack | 16:57 | |
*** purpaboo is now known as lurkaboo | 16:58 | |
*** Ryan_Lane has joined #openstack | 16:59 | |
*** nerens has joined #openstack | 17:02 | |
*** watcher has quit IRC | 17:02 | |
*** jeffk has joined #openstack | 17:04 | |
*** jeffk has quit IRC | 17:05 | |
*** jeffk has joined #openstack | 17:05 | |
*** jeffk has quit IRC | 17:07 | |
*** jeffkramer has joined #openstack | 17:08 | |
*** photron has quit IRC | 17:11 | |
*** jeffkramer has joined #openstack | 17:14 | |
*** MotoMilind has quit IRC | 17:16 | |
*** dirkx__ has joined #openstack | 17:18 | |
*** Ryan_Lane is now known as Ryan_Lane|brb | 17:19 | |
*** krish|wired-in has quit IRC | 17:20 | |
*** jakedahn has quit IRC | 17:26 | |
*** Ryan_Lane|brb is now known as Ryan_lane | 17:26 | |
*** larry__ has quit IRC | 17:26 | |
*** keny has quit IRC | 17:26 | |
*** ChameleonSys has quit IRC | 17:27 | |
*** MotoMilind has joined #openstack | 17:27 | |
*** ChameleonSys has joined #openstack | 17:32 | |
*** zaitcev has joined #openstack | 17:34 | |
*** tjikkun has quit IRC | 17:37 | |
*** Jordandev has joined #openstack | 17:37 | |
*** clauden has joined #openstack | 17:39 | |
*** aa___ has joined #openstack | 17:39 | |
aa___ | howdi... got some swift questions... | 17:40 |
aa___ | I'm trying to figure out why swauth-prep and friends hang once in a while | 17:40 |
*** zenmatt has quit IRC | 17:41 | |
aa___ | have folks ran into similar issues? | 17:41 |
*** krish|wired-in has joined #openstack | 17:47 | |
Eyk | is the S3 access to swift working in current trunk? If its known to be broken, then I could stop trying ;-) | 17:48
*** crescendo has joined #openstack | 17:49 | |
*** nelson has quit IRC | 17:49 | |
*** CloudChris has joined #openstack | 17:49 | |
*** nelson has joined #openstack | 17:50 | |
*** CloudChris has left #openstack | 17:50 | |
*** CloudChris has joined #openstack | 17:51 | |
*** photron_ has joined #openstack | 17:52 | |
*** krish|wired-in has quit IRC | 17:56 | |
*** MotoMilind1 has joined #openstack | 17:59 | |
*** MotoMilind has quit IRC | 17:59 | |
*** BK_man has joined #openstack | 18:01 | |
*** nijaba_afk has joined #openstack | 18:02 | |
*** nijaba has quit IRC | 18:03 | |
*** zenmatt has joined #openstack | 18:05 | |
*** tjikkun has joined #openstack | 18:09 | |
*** tjikkun has joined #openstack | 18:09 | |
*** Jordandev has quit IRC | 18:10 | |
*** MotoMilind1 has quit IRC | 18:11 | |
*** dragondm has quit IRC | 18:17 | |
*** MotoMilind has joined #openstack | 18:18 | |
*** shentonfreude1 has joined #openstack | 18:25 | |
*** mgius has joined #openstack | 18:26 | |
*** shentonfreude has quit IRC | 18:28 | |
*** mszilagyi has joined #openstack | 18:28 | |
*** mgius has quit IRC | 18:29 | |
*** cole has joined #openstack | 18:29 | |
*** dprince_ has joined #openstack | 18:30 | |
aa___ | are there any swift folks in the room? | 18:30 |
*** MotoMilind has quit IRC | 18:31 | |
*** openfly has joined #openstack | 18:36 | |
openfly | hey in the nova flag file what does max_gigabytes mean? is that maximum virtual ram or maximum virtual disk space to allocate? | 18:36 |
*** MotoMilind has joined #openstack | 18:38 | |
*** mgius has joined #openstack | 18:38 | |
openfly | it's max volume from what i read in scheduler code | 18:43 |
* openfly & | 18:43 | |
*** openfly has left #openstack | 18:43 | |
cole | aa___: openfly seems right..just went and looked it up | 18:44 |
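For anyone else wondering about this flag: as openfly found in the scheduler code, max_gigabytes caps the total gigabytes of volumes the simple scheduler will place on one nova-volume host; it has nothing to do with RAM. The value below is only illustrative.

```sh
# hedged sketch: cap on total *volume* gigabytes per nova-volume host (not RAM)
echo '--max_gigabytes=10000' >> /etc/nova/nova.conf
```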
*** jtran has joined #openstack | 18:47 | |
*** dragondm has joined #openstack | 18:49 | |
*** CloudChris has quit IRC | 18:51 | |
*** skiold has quit IRC | 18:55 | |
*** larry__ has joined #openstack | 18:55 | |
*** katkee has joined #openstack | 18:57 | |
*** daveiw has joined #openstack | 19:02 | |
*** MotoMilind has quit IRC | 19:08 | |
*** mattray has quit IRC | 19:08 | |
*** dirkx__ has quit IRC | 19:09 | |
*** joearnold has quit IRC | 19:11 | |
*** scalability-junk has quit IRC | 19:15 | |
*** dirkx__ has joined #openstack | 19:17 | |
aa___ | looking for someone to chat about swift and swauth issues... anyone around? | 19:18 |
*** dprince_ has quit IRC | 19:18 | |
dprince | exit | 19:18 |
dprince | exit | 19:18 |
*** dprince has quit IRC | 19:18 | |
*** MotoMilind has joined #openstack | 19:19 | |
*** dirkx__ has quit IRC | 19:19 | |
*** dirkx__ has joined #openstack | 19:21 | |
*** dirkx__ has quit IRC | 19:31 | |
*** dirkx__ has joined #openstack | 19:32 | |
*** MotoMilind has quit IRC | 19:32 | |
*** ohnoimdead has joined #openstack | 19:32 | |
*** amccabe has quit IRC | 19:33 | |
*** h1nch has quit IRC | 19:34 | |
*** dobber_ has joined #openstack | 19:35 | |
dobber_ | hi, can i pxe boot into stackops ? | 19:36 |
*** MotoMilind has joined #openstack | 19:44 | |
*** Eyk has quit IRC | 19:44 | |
*** photron_ has quit IRC | 19:46 | |
*** daveiw has left #openstack | 19:47 | |
*** joearnold has joined #openstack | 19:51 | |
*** amccabe has joined #openstack | 19:52 | |
*** dirkx__ has quit IRC | 19:53 | |
*** h1nch has joined #openstack | 19:54 | |
*** kbringard has joined #openstack | 19:59 | |
kbringard | howdy peoples! | 19:59 |
kbringard | quick ? | 19:59 |
kbringard | I've assigned a DNS name to my API's external IP | 19:59 |
kbringard | when I connect to it that way, I get a 403 | 19:59 |
kbringard | and I figured it out | 20:00 |
kbringard | all it took was me coming in here and asking, so I'd look like an idiot as soon as I asked | 20:00 |
kbringard | hah | 20:00 |
jtran | lol i hate it when that happens | 20:01 |
kbringard | indeed | 20:02 |
dsockwell | can nova understand shared local storage? performance issues aside, could I put all my local images on the same nfs share and not cause a panic? | 20:03 |
*** Ryan_lane is now known as Ryan_Lane | 20:04 | |
*** bcwaldon has quit IRC | 20:04 | |
*** bcwaldon has joined #openstack | 20:04 | |
*** baffle has joined #openstack | 20:06 | |
*** mgius has quit IRC | 20:09 | |
*** vernhart has quit IRC | 20:11 | |
*** mgius has joined #openstack | 20:11 | |
*** mgius is now known as Guest9008 | 20:12 | |
*** sdadh01 has quit IRC | 20:14 | |
*** rcc has quit IRC | 20:14 | |
*** ctennis has quit IRC | 20:14 | |
*** allsystemsarego has quit IRC | 20:14 | |
*** BK_man has quit IRC | 20:16 | |
*** Guest9008 is now known as mgius_ | 20:17 | |
*** imsplitbit has joined #openstack | 20:18 | |
kbringard | dsockwell: yep | 20:19 |
kbringard | in fact, if you want to use live migration, your instances have to reside on some kind of shared storage | 20:19 |
*** BK_man has joined #openstack | 20:19 | |
dsockwell | ok. the documentation says live-migration only works with kvm, is that correct? | 20:21 |
kbringard | probably... I've only used KVM so I can't say for sure | 20:21 |
kbringard | but, I can say I've live migrated machines and it does indeed work with KVM :-D | 20:21 |
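Tying this back to dsockwell's NFS question: a rough sketch of the shared-storage side of live migration, where the filer export and mount point are illustrative and the flag is only needed if the instances directory lives somewhere non-default.

```sh
# hedged sketch: every nova-compute host mounts the same instances directory
mount -t nfs filer:/export/nova-instances /var/lib/nova/instances
# if that path is not nova's default instances directory, point nova at it:
echo '--instances_path=/var/lib/nova/instances' >> /etc/nova/nova.conf
```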
*** ctennis has joined #openstack | 20:27 | |
*** ctennis has joined #openstack | 20:27 | |
*** bcwaldon has quit IRC | 20:29 | |
*** sdadh01 has joined #openstack | 20:29 | |
*** bcwaldon has joined #openstack | 20:29 | |
*** pguth66 has joined #openstack | 20:38 | |
*** carlp has joined #openstack | 20:41 | |
*** mgoldmann has quit IRC | 20:43 | |
*** ribo has joined #openstack | 20:43 | |
*** perestrelka has quit IRC | 20:48 | |
*** mattt has joined #openstack | 20:48 | |
jaypipes | #openstack-meeting in 10 minutes. | 20:50 |
dsockwell | jaypipes: real quick, do you know if live migration is limited to the kvm hypervisor? | 20:52 |
*** mgius_ is now known as mgius | 20:53 | |
*** mattray has joined #openstack | 20:53 | |
jaypipes | dsockwell: I believe so currently. | 20:54 |
jaypipes | dsockwell: but I could very well be wrong on that... | 20:54 |
dsockwell | jaypipes: the blueprint suggests that at least in cactus it is, thanks | 20:54 |
*** perestrelka has joined #openstack | 20:56 | |
*** msivanes has quit IRC | 20:58 | |
*** vernhart has joined #openstack | 20:58 | |
ttx | Meeting in 2 min. in #openstack-meeting | 20:59 |
*** mattray has quit IRC | 20:59 | |
*** galthaus has quit IRC | 21:03 | |
*** blamar_ has joined #openstack | 21:05 | |
*** antenagora has joined #openstack | 21:07 | |
*** watcher has joined #openstack | 21:13 | |
*** watcher has quit IRC | 21:13 | |
*** watcher has joined #openstack | 21:14 | |
*** antenagora has quit IRC | 21:16 | |
*** nati has joined #openstack | 21:18 | |
*** obino has quit IRC | 21:20 | |
*** Eyk has joined #openstack | 21:27 | |
*** pguth66 has quit IRC | 21:28 | |
*** mattray has joined #openstack | 21:31 | |
*** patcoll has quit IRC | 21:32 | |
*** midodan has joined #openstack | 21:32 | |
*** devcamca- has left #openstack | 21:33 | |
*** devcamca- has joined #openstack | 21:33 | |
*** CloudChris has joined #openstack | 21:34 | |
*** joearnold has quit IRC | 21:37 | |
*** amccabe has quit IRC | 21:37 | |
*** nerens has quit IRC | 21:37 | |
*** joearnold has joined #openstack | 21:37 | |
*** CloudChris has left #openstack | 21:37 | |
*** mszilagyi has quit IRC | 21:47 | |
*** spectorclan_ has quit IRC | 21:49 | |
*** ohnoimdead has quit IRC | 21:51 | |
*** nati has quit IRC | 21:53 | |
*** mgius has quit IRC | 21:53 | |
*** gondoi has quit IRC | 21:54 | |
*** _0x44 has left #openstack | 21:55 | |
*** mgius has joined #openstack | 21:55 | |
*** pguth66 has joined #openstack | 21:56 | |
*** mgius is now known as Guest8574 | 21:56 | |
*** imsplitbit has quit IRC | 21:58 | |
*** ChanServ changes topic to "Openstack Support Channel, Development in #openstack-dev | Wiki: http://wiki.openstack.org/ | Nova Docs: nova.openstack.org | Swift Docs: swift.openstack.org | Logs: http://eavesdrop.openstack.org/irclogs/ | http://paste.openstack.org/" | 21:58 | |
*** markvoelker has quit IRC | 22:00 | |
*** bcwaldon has quit IRC | 22:01 | |
*** nati has joined #openstack | 22:02 | |
*** kbringard has quit IRC | 22:02 | |
*** tblamer has quit IRC | 22:04 | |
*** grapex has quit IRC | 22:04 | |
*** shentonfreude1 has quit IRC | 22:05 | |
*** dobber_ has quit IRC | 22:20 | |
*** grapex has joined #openstack | 22:20 | |
*** grapex has quit IRC | 22:25 | |
*** aa___ has quit IRC | 22:25 | |
*** nelson has quit IRC | 22:28 | |
*** nelson has joined #openstack | 22:28 | |
*** blamar_ has quit IRC | 22:29 | |
*** ohnoimdead has joined #openstack | 22:30 | |
*** ohnoimdead has left #openstack | 22:38 | |
*** grapex has joined #openstack | 22:40 | |
*** zaitcev has quit IRC | 22:40 | |
*** katkee has quit IRC | 22:42 | |
*** shentonfreude has joined #openstack | 22:44 | |
*** jeffkramer has quit IRC | 22:44 | |
*** zaitcev has joined #openstack | 22:44 | |
*** cole has quit IRC | 22:48 | |
*** watcher has quit IRC | 22:51 | |
*** miclorb_ has joined #openstack | 22:53 | |
*** troytoman is now known as troytoman-away | 22:54 | |
uvirtbot | New bug: #780784 in nova "KeyError in image snapshotting" [Undecided,New] https://launchpad.net/bugs/780784 | 22:57 |
*** miclorb_ has quit IRC | 22:59 | |
*** miclorb has joined #openstack | 22:59 | |
*** nati has quit IRC | 23:03 | |
*** rnirmal has quit IRC | 23:04 | |
*** midodan has quit IRC | 23:12 | |
*** jkoelker has quit IRC | 23:17 | |
*** crescendo has quit IRC | 23:19 | |
*** widodh has quit IRC | 23:19 | |
*** widodh has joined #openstack | 23:19 | |
*** Guest8574 has quit IRC | 23:19 | |
*** crescendo has joined #openstack | 23:20 | |
uvirtbot | New bug: #780788 in nova "EC2 API should allow query deleted objects" [Undecided,New] https://launchpad.net/bugs/780788 | 23:22 |
*** mattray has quit IRC | 23:24 | |
*** mattray has joined #openstack | 23:26 | |
*** enigma1 has quit IRC | 23:26 | |
*** grapex has quit IRC | 23:27 | |
*** BK_man has quit IRC | 23:35 | |
*** jtran has left #openstack | 23:36 | |
*** MotoMilind has quit IRC | 23:40 | |
*** johnpur has quit IRC | 23:48 | |
*** zenmatt has quit IRC | 23:52 | |
*** markwash_ has quit IRC | 23:52 | |
*** markwash_ has joined #openstack | 23:53 | |
*** BK_man has joined #openstack | 23:53 | |
*** BK_man has quit IRC | 23:54 | |
*** Eyk has quit IRC | 23:56 | |
*** zenmatt has joined #openstack | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!