*** littleidea has joined #openstack | 00:04 | |
*** mgius has quit IRC | 00:09 | |
*** ton_katsu has joined #openstack | 00:09 | |
*** jinjax has joined #openstack | 00:11 | |
jinjax | Is there an instruction manual for using XenServer 6.0 (beta) to instantiate VMs with Nova? I am not married to XenServer 6.0, Xen 5.6 will do. | 00:13 |
*** Mandell has joined #openstack | 00:13 | |
*** nijaba has quit IRC | 00:14 | |
*** nijaba has joined #openstack | 00:16 | |
*** adjohn has quit IRC | 00:17 | |
*** joearnold has quit IRC | 00:20 | |
*** nijaba has quit IRC | 00:22 | |
*** nijaba has joined #openstack | 00:23 | |
kpepple | jinjax: start here http://wiki.openstack.org/XenServerDevelopment | 00:29 |
*** nijaba has quit IRC | 00:29 | |
*** msinhore has quit IRC | 00:30 | |
*** msinhore1 has joined #openstack | 00:30 | |
*** clauden_ has joined #openstack | 00:30 | |
*** nijaba has joined #openstack | 00:31 | |
*** littleidea has quit IRC | 00:32 | |
*** ccc11 has joined #openstack | 00:34 | |
*** nijaba has quit IRC | 00:35 | |
*** nijaba has joined #openstack | 00:36 | |
*** bluetux has joined #openstack | 00:36 | |
*** clauden_ has quit IRC | 00:36 | |
*** msinhore has joined #openstack | 00:37 | |
*** msinhore1 has quit IRC | 00:37 | |
*** nijaba has quit IRC | 00:46 | |
*** nijaba has joined #openstack | 00:46 | |
*** vladimir3p_ has quit IRC | 00:47 | |
*** msinhore has quit IRC | 00:48 | |
*** mszilagyi has quit IRC | 00:49 | |
*** dirakx1 has joined #openstack | 00:52 | |
*** kashyap has joined #openstack | 00:56 | |
*** jdurgin has quit IRC | 01:00 | |
*** HugoKuo has quit IRC | 01:00 | |
*** HugoKuo has joined #openstack | 01:07 | |
*** ohnoimdead has quit IRC | 01:07 | |
HugoKuo | morning | 01:07 |
*** jakedahn has quit IRC | 01:12 | |
*** lorin1 has quit IRC | 01:13 | |
*** lorin1 has joined #openstack | 01:14 | |
*** nijaba has quit IRC | 01:15 | |
*** nijaba has joined #openstack | 01:15 | |
*** vodanh86 has joined #openstack | 01:18 | |
vodanh86 | as I read about networking options in the Cactus doc: "In both flat modes, the network nodes do not act as a default gateway. Instances are given public IP addresses." | 01:22 |
vodanh86 | is it correct? | 01:22 |
*** jinjax has quit IRC | 01:27 | |
*** nijaba has quit IRC | 01:28 | |
*** nijaba has joined #openstack | 01:28 | |
HugoKuo | I don't agree with that | 01:29 |
*** jeffjapan has joined #openstack | 01:30 | |
*** dysinger has joined #openstack | 01:30 | |
vodanh86 | as I understand it, in FlatDHCP mode the nova-network host acts as the gateway, but how about flat mode? | 01:31 |
dysinger | question: if I have a client that wants to hack on openstack code & they want to override some of Swift's code, can they just deploy a python egg that would override the Ubuntu-packaged python code for swift? Is it possible to just put the egg code in the path first before starting services? I am no python class-loader expert. | 01:32 |
HugoKuo | vodanh86, as far as I know | 01:33 |
HugoKuo | the difference between flat & flatDHCP | 01:33 |
HugoKuo | is how IPs are assigned to instances | 01:33 |
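For context, a minimal sketch of the Cactus-era nova.conf flags behind the two modes being compared here (only one of the two lines would be set; the class paths are the stock nova network managers):

    # flat mode: nova injects a fixed IP into the image; no nova-run DHCP
    --network_manager=nova.network.manager.FlatManager
    # FlatDHCP mode: nova-network runs dnsmasq and hands those fixed IPs out over DHCP
    --network_manager=nova.network.manager.FlatDHCPManager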
*** nijaba has quit IRC | 01:34 | |
*** nijaba has joined #openstack | 01:35 | |
*** cereal_bars has quit IRC | 01:38 | |
*** nijaba has quit IRC | 01:38 | |
*** nijaba has joined #openstack | 01:38 | |
*** Mandell has quit IRC | 01:41 | |
*** iammartian has joined #openstack | 01:44 | |
*** koolhead171 has joined #openstack | 01:45 | |
*** shang has joined #openstack | 01:55 | |
*** koolhead171 has quit IRC | 01:59 | |
*** skraps has quit IRC | 02:02 | |
*** berto- has quit IRC | 02:02 | |
*** Mandell has joined #openstack | 02:05 | |
*** ziyadb_ has joined #openstack | 02:11 | |
*** ziyadb has quit IRC | 02:12 | |
notmyname | dysinger: that should work. depending on what part of swift you want to override, you may simply be able to do it with your own middleware | 02:16 |
dysinger | notmyname: thanks | 02:17 |
notmyname | dysinger: but essentially, Python uses a path system like a shell and the first package it finds is the one it uses. the shell var to control this is PYTHONPATH | 02:17 |
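A minimal sketch of the path-first override notmyname describes, assuming the patched package lives in a hypothetical /opt/my-swift directory:

    # export before starting each swift service so the patched package shadows the Ubuntu one
    export PYTHONPATH=/opt/my-swift:$PYTHONPATH
    python -c 'import swift; print swift.__file__'   # should now resolve under /opt/my-swift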
johnpur | is anyone running swift on 11.04? I am seeing some weird xfs timeout errors... | 02:17 |
notmyname | johnpur: sorry I didn't answer you earlier. I don't know. we're using LTS | 02:18 |
dysinger | notmyname: sounds like ruby, java, erlang, etc - it's all the same - first code in the path that matches wins | 02:18 |
notmyname | dysinger: indeed | 02:18 |
johnpur | notmyname: I know! Did you find any XFS weirdness in getting the LTS version to work? | 02:19 |
johnpur | notmyname: the system is throwing timeout errors on the sync process | 02:19 |
notmyname | johnpur: can you paste a traceback? | 02:20 |
johnpur | notmyname: not tonight, but soon | 02:20 |
notmyname | johnpur: but you're sure it's xfs? | 02:20 |
notmyname | (ie not swift throwing the timeouts) | 02:21 |
johnpur | the error is: INFO: task xfssyncd/sdb2/6694 blocked for more than 120 seconds | 02:22 |
notmyname | johnpur: not sure. perhaps tomorrow we can bug letterj or pandemicsyn about it. btw, cw is also an expert on xfs :-) | 02:23 |
johnpur | notmyname: sounds good... i am wondering if xfs is sensitive to ubuntu versions... a lot of testing has been done on 10.04, not so much on 11.04... | 02:25 |
notmyname | and people wonder why we run LTS ;-) | 02:25 |
johnpur | LOL | 02:26 |
johnpur | notmyname: Sent you my contact info, let me know if you don't get it | 02:28 |
*** [1]RickB17 has joined #openstack | 02:29 | |
notmyname | johnpur: got it | 02:29 |
johnpur | notmyname: cool! | 02:29 |
*** jakedahn has joined #openstack | 02:29 | |
*** jakedahn has quit IRC | 02:29 | |
*** jakedahn has joined #openstack | 02:29 | |
*** ziyadb_ has quit IRC | 02:33 | |
*** lborda has joined #openstack | 02:36 | |
*** skraps has joined #openstack | 02:36 | |
*** adjohn has joined #openstack | 02:39 | |
*** kashyap has quit IRC | 02:42 | |
*** iammartian has quit IRC | 02:47 | |
*** gaveen has joined #openstack | 02:50 | |
*** johnpur has quit IRC | 02:50 | |
uvirtbot | New bug: #806288 in nova "Unable to ping or ssh Fedora 15 instance" [Undecided,New] https://launchpad.net/bugs/806288 | 02:51 |
*** j05h has quit IRC | 02:53 | |
uvirtbot | New bug: #806289 in nova "Error launching RHEL 6 instance" [Undecided,New] https://launchpad.net/bugs/806289 | 02:56 |
*** osier has joined #openstack | 02:58 | |
*** PeteDaGuru has quit IRC | 03:00 | |
*** lmi563 has joined #openstack | 03:01 | |
*** kashyap has joined #openstack | 03:08 | |
*** gohko has quit IRC | 03:14 | |
*** aliguori has quit IRC | 03:16 | |
*** openpercept has joined #openstack | 03:18 | |
*** [1]RickB17 has quit IRC | 03:19 | |
*** mdomsch has joined #openstack | 03:20 | |
*** kashyap has quit IRC | 03:20 | |
*** dgags has quit IRC | 03:22 | |
*** ewindisch has quit IRC | 03:22 | |
*** kashyap has joined #openstack | 03:22 | |
*** saju_m has quit IRC | 03:29 | |
*** toytoy has joined #openstack | 03:30 | |
*** toytoy has quit IRC | 03:35 | |
*** toytoy has joined #openstack | 03:35 | |
*** toytoy has quit IRC | 03:36 | |
*** toytoy has joined #openstack | 03:36 | |
*** GeoDud has quit IRC | 03:36 | |
*** lorin1 has left #openstack | 03:39 | |
*** lborda has quit IRC | 03:40 | |
*** lmi563 has left #openstack | 03:41 | |
*** deepa has quit IRC | 03:42 | |
*** AimanA is now known as HouseAway | 03:44 | |
*** deepa has joined #openstack | 03:46 | |
*** jakedahn has quit IRC | 03:49 | |
*** mihgen has joined #openstack | 03:50 | |
*** rchavik has joined #openstack | 03:51 | |
HugoKuo | hi guys , | 03:52 |
HugoKuo | Is there any better way to update the DB while upgrading nova? | 03:52 |
*** mszilagyi has joined #openstack | 03:52 | |
HugoKuo | from Cactus to Trunk ? | 03:53 |
*** jakedahn has joined #openstack | 03:56 | |
*** Ephur has joined #openstack | 04:02 | |
*** katkee has joined #openstack | 04:04 | |
*** osier has quit IRC | 04:07 | |
*** katkee has quit IRC | 04:08 | |
*** katkee has joined #openstack | 04:10 | |
*** MarkAtwood has quit IRC | 04:12 | |
*** Ephur has quit IRC | 04:13 | |
*** negronjl_ has joined #openstack | 04:13 | |
*** Ephur has joined #openstack | 04:13 | |
*** negronjl has quit IRC | 04:15 | |
*** deepest has joined #openstack | 04:17 | |
deepest | Hi everyone | 04:17 |
deepest | Nice to see you all | 04:18 |
deepest | I'm writing from Vietnam | 04:18 |
deepest | I have something to ask you guys about OpenStack | 04:18 |
deepest | If possible, please give me some suggestion | 04:18 |
*** MarkAtwood has joined #openstack | 04:18 | |
deepest | I'm a beginner in OpenStack | 04:19 |
deepest | I want to know about Storage | 04:19 |
deepest | using solaris | 04:19 |
deepest | anyone can support me? | 04:20 |
*** jakedahn_ has joined #openstack | 04:24 | |
*** jakedahn has quit IRC | 04:28 | |
*** jakedahn_ is now known as jakedahn | 04:28 | |
kpepple | deepest: you want to use solaris with nova-volume or swift ? | 04:28 |
deepest | I want to use solaris with nova-volume | 04:29 |
deepest | do you have any document or source code to describe about that? | 04:30 |
deepest | in my company now | 04:30 |
deepest | we have a server with solaris | 04:31 |
deepest | and I want to use this server with solaris for nova-volume | 04:31 |
deepest | what do I need to do? | 04:31 |
deepest | and How can I do this job? | 04:32 |
kpepple | deepest: there is a SolarisISCSIDriver in nova/volume/san.py that should help you. follow the directions in that file at line 144. | 04:33 |
deepest | where can I see this file? | 04:34 |
kpepple | deepest: if you've installed from the source code, it's in trunk/nova/volume/san.py. If you installed from package, it's probably where ever they installed the nova egg. | 04:35 |
kpepple | deepest: let me see if I can find it online for you | 04:35 |
deepest | ok thank you so much | 04:36 |
*** jakedahn has left #openstack | 04:37 | |
kpepple | deepest: read this file (http://bazaar.launchpad.net/~hudson-openstack/nova/trunk/view/head:/nova/volume/san.py) starting at line 144 for instructions on setting up the Solaris server | 04:38 |
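As a rough sketch, the nova-volume side of that setup boils down to a few nova.conf flags; the names below are as I recall them from san.py of that era and the address/credentials are placeholders, so check the file itself for the authoritative list:

    --volume_driver=nova.volume.san.SolarisISCSIDriver
    --san_ip=192.168.0.50
    --san_login=stack
    --san_password=secret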
deepest | thank you, Kpepple | 04:41 |
deepest | do you have an email address? | 04:41 |
kpepple | no problem | 04:42 |
deepest | could you give it to me? If I have something to ask, I will send mail to you | 04:42 |
kpepple | deepest: my contact info is at http://ken.pepple.info/about/ | 04:45 |
deepest | as we know, we must install nova-volume with LVM (Logical Volume Manager)? | 04:45 |
deepest | and now, if I want to use solaris, then where do I install nova-volume? | 04:46 |
deepest | on the solaris server or on another host? | 04:46 |
kpepple | deepest: i think you need to install it on another host (it uses ssh to contact the solaris iSCSI server) | 04:47 |
deepest | and about Solaris Server | 04:48 |
deepest | do they need LVM to contact nova-volume? | 04:48 |
kpepple | deepest: i don't think they need LVM | 04:48 |
*** mszilagyi has quit IRC | 04:50 | |
deepest | ok | 04:50 |
deepest | I think my knowledge is not enough | 04:51 |
deepest | so, do you have any document that describes storage in more detail? | 04:51 |
deepest | I read the tutorial file " compute adminguide cactus" and saw some information about LVM | 04:53 |
kpepple | deepest: the best tutorial i've read is http://www.dubsquared.com/?p=120 but that does not talk about Solaris | 04:53 |
deepest | and they said that "To set up Compute to use volumes, ensure that nova-volume is installed along with lvm2." | 04:53 |
deepest | thanks | 04:54 |
deepest | but I think first I must understand storage deeply | 04:54 |
deepest | and after that I can change to Solaris | 04:55 |
kpepple | deepest: yes, it is best to start with something easier like linux then try solaris | 04:55 |
deepest | yep | 04:56 |
deepest | I will try my best to do this job | 04:56 |
deepest | in the future, I think I will have many questions. I will send email to you; if possible, please support me. | 04:57 |
*** f4m8_ is now known as f4m8 | 04:57 | |
*** Mandell has quit IRC | 05:00 | |
kpepple | deepest: good luck | 05:00 |
*** vladimir3p_ has joined #openstack | 05:07 | |
deepest | thank you | 05:08 |
*** Espenfjo has quit IRC | 05:09 | |
*** shentonfreude has joined #openstack | 05:27 | |
*** vladimir3p_ has quit IRC | 05:31 | |
*** miclorb_ has quit IRC | 05:39 | |
*** miclorb has joined #openstack | 05:39 | |
*** ewindisch has joined #openstack | 05:52 | |
*** dirakx1 has quit IRC | 05:56 | |
*** ewindisch has quit IRC | 05:56 | |
*** ewindisch has joined #openstack | 06:00 | |
*** ccc11 has quit IRC | 06:09 | |
*** katkee has quit IRC | 06:12 | |
*** ewindisch has quit IRC | 06:18 | |
*** mihgen has quit IRC | 06:23 | |
*** mihgen has joined #openstack | 06:23 | |
*** gohko has joined #openstack | 06:29 | |
*** wcang has joined #openstack | 06:32 | |
*** Jedicus2 has joined #openstack | 06:35 | |
*** po has joined #openstack | 06:35 | |
*** Jedicus has quit IRC | 06:36 | |
*** berto- has joined #openstack | 06:40 | |
*** dragondm has joined #openstack | 06:45 | |
*** mihgen has quit IRC | 06:45 | |
uvirtbot | New bug: #806338 in nova "recursive zone calls hang when at least 1 child API call hangs" [Undecided,Confirmed] https://launchpad.net/bugs/806338 | 06:46 |
*** saju_m has joined #openstack | 06:47 | |
*** MarkAtwood has quit IRC | 06:48 | |
*** murkk has quit IRC | 06:58 | |
*** berto- has quit IRC | 06:59 | |
*** murkk has joined #openstack | 07:01 | |
*** reidrac has joined #openstack | 07:02 | |
*** reidrac has left #openstack | 07:03 | |
*** reidrac has joined #openstack | 07:03 | |
*** reidrac has left #openstack | 07:03 | |
*** berto- has joined #openstack | 07:03 | |
*** reidrac has joined #openstack | 07:04 | |
*** zenmatt has quit IRC | 07:04 | |
*** Ephur has quit IRC | 07:06 | |
*** skraps has quit IRC | 07:08 | |
uvirtbot | New bug: #806340 in nova "All VLANs/bridges created even if it's unnecessary " [Undecided,New] https://launchpad.net/bugs/806340 | 07:11 |
*** BK_man has quit IRC | 07:13 | |
*** BK_man_ is now known as BK_man | 07:13 | |
*** berto- has quit IRC | 07:21 | |
*** willaerk has joined #openstack | 07:22 | |
saju_m | jpgeek1: now both ping and SSH are not working: http://paste.openstack.org/show/1843/ | 07:25 |
saju_m | ping to the instance was working before the flat-network configuration. | 07:26 |
*** npmapn has joined #openstack | 07:31 | |
*** mancdaz1203 has quit IRC | 07:32 | |
*** mancdaz has joined #openstack | 07:33 | |
saju_m | jpgeek1: hi, r u busy ? | 07:38 |
*** ccc11 has joined #openstack | 07:40 | |
*** duffman has quit IRC | 07:40 | |
*** duffman has joined #openstack | 07:40 | |
vodanh86 | i plan to deploy as in the picture https://skydrive.live.com/#!/?cid=a996ef8137167635&sc=documents&uc=1&id=A996EF8137167635!166!cid=A996EF8137167635&id=A996EF8137167635!216&sc=documents | 07:45 |
vodanh86 | is it possible? | 07:45 |
*** nacx has joined #openstack | 07:48 | |
*** saju_m has quit IRC | 07:50 | |
*** dobber has joined #openstack | 07:54 | |
*** ibarrera has joined #openstack | 07:58 | |
*** katkee has joined #openstack | 08:01 | |
*** berto- has joined #openstack | 08:03 | |
*** mancdaz has quit IRC | 08:04 | |
*** saju_m has joined #openstack | 08:05 | |
*** mancdaz has joined #openstack | 08:05 | |
*** saju_m has quit IRC | 08:06 | |
*** nagyz has joined #openstack | 08:06 | |
nagyz | hi | 08:06 |
nagyz | I have a two-compute-node setup. On the first node everything is OK, but on the second, the ubuntu cloud image behaves as if it hasn't got an IP. However, both in the db and in the tools, I see the assigned IP. I'm using the flat manager. | 08:06 |
nagyz | any ideas? | 08:06 |
mattt | nope, but i have a question ... w/ a two compute-node setup, does each compute node need to run nova-compute? | 08:12 |
nagyz | yes | 08:12 |
nagyz | nova-compute IS the compute component. | 08:12 |
mattt | yeah, that's what i figured, until i read some answers saying that nova-compute does not need to run on the dom0, which threw me off. | 08:12 |
nagyz | I don't know about Xen, but I guess it should be the same with KVM: it should run on each compute node. | 08:13 |
nagyz | hence the name. :) | 08:13 |
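A rough sketch of the service layout being described for a two-node KVM setup, using the stock Ubuntu package names:

    # controller (which here also acts as a compute node)
    apt-get install nova-api nova-scheduler nova-network nova-compute
    # each additional compute node only needs
    apt-get install nova-compute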
*** nacx has quit IRC | 08:13 | |
*** ibarrera is now known as nacx | 08:13 | |
*** saju_m has joined #openstack | 08:14 | |
mattt | nagyz: https://answers.launchpad.net/nova/+question/149590 | 08:17 |
*** jeffjapan has quit IRC | 08:17 | |
mattt | nagyz: think i misunderstood what was asked and said when i read it late last night :) | 08:17 |
*** dragondm has quit IRC | 08:19 | |
nagyz | ok | 08:19 |
nagyz | it says it needs to run on each physical host in dom0 | 08:20 |
nagyz | obviously. :) | 08:20 |
*** deepest has quit IRC | 08:21 | |
*** saju_m has quit IRC | 08:25 | |
*** Vasichkin has joined #openstack | 08:30 | |
nagyz | hm, checked /etc/network/interfaces inside the image, and that seems ok | 08:33 |
nagyz | and I can't ping a VM on compute node 1 from compute node 2 | 08:35 |
nagyz | :( | 08:35 |
*** bluetux has quit IRC | 08:35 | |
nagyz | with tcpdump I see the ARP request arriving in the vnet interface belonging to the VM | 08:35 |
nagyz | ah, there's a mention of an iptables prerouting DNAT | 08:38 |
nagyz | that doesn't change a thing. | 08:41 |
*** circloud has joined #openstack | 08:49 | |
*** chemikadze has quit IRC | 08:55 | |
nagyz | hm, with a natty image, it isn't even booting | 08:57 |
*** chemikadze has joined #openstack | 08:57 | |
doude_ | Hi all, I get an error with one of my nova-compute nodes. When it tries to fetch an image from Glance, Glance responds with a 300 HTTP code: http://paste.openstack.org/show/1845/ and nova-compute fails with this log: http://paste.openstack.org/show/1846/ | 08:57 |
nagyz | doude_, I'm trying to set up my second compute node. did you have any problems with networking? | 08:57 |
doude_ | it's strange, I get this error only with this node. With other nodes, it works nicely. I use the same version (rev 1244) on all nodes | 08:59 |
doude_ | nagyz: no networking problem. Which network mode do you use ? | 08:59 |
*** dobber has quit IRC | 09:00 | |
nagyz | flatmanager | 09:00 |
nagyz | and according to all the logs and the injected interfaces file, the VMs on the second compute node is ok | 09:01 |
nagyz | but in reality, they don't work | 09:01 |
doude_ | nagyz: I use VLAN mode. What's your problem ? | 09:01 |
nagyz | the VM starts up, the interfaces file looks good, populated with the IP | 09:01 |
nagyz | however, the VM stops booting and starts throwing metadata timeout messages | 09:01 |
nagyz | http://docs.openstack.org/cactus/openstack-compute/admin/content/configuring-multiple-compute-nodes.html | 09:02 |
nagyz | here they mention I should do a DNAT for UEC images | 09:02 |
nagyz | but I did that, and no packet gets caught in the rule | 09:02 |
doude_ | nagyz: do you have boot log of your VM ? | 09:02 |
nagyz | let me start up one | 09:02 |
doude_ | nagyz: ok | 09:03 |
nagyz | well, now I don't see the timeout messages, but I see this: | 09:04 |
nagyz | init: plymouth main process (48) killed by SEGV signal | 09:04 |
nagyz | init: plymouth-splash main process (296) terminated with status 2 | 09:04 |
nagyz | https://answers.launchpad.net/nova/+question/145062 | 09:05 |
nagyz | this seems very similar | 09:05 |
*** dobber has joined #openstack | 09:06 | |
nagyz | should I switch to FlatDHCPManager? | 09:06 |
*** morfeas has quit IRC | 09:11 | |
doude_ | nagyz: your VM (UEC image) tries to access the metadata at IP 169.254.169.254:80. You must forward traffic for this IP to the ec2_dmz_host:ec2_port of your IaaS. In FlatDHCP and VLAN mode an iptables rule is added automatically, but in flat mode it isn't, so you must do it yourself | 09:12 |
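A sketch of the rule doude_ means; 10.1.1.3:8773 stands in for this deployment's ec2_dmz_host:ec2_port (8773 is the default EC2 API port):

    iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
        -j DNAT --to-destination 10.1.1.3:8773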
nagyz | I've added it per instructions of the linked docs | 09:13 |
nagyz | but with iptables -t nat -L -v -n, I see zero packets on that rule | 09:13 |
nagyz | ah, it should be done on the default gw | 09:14 |
*** miclorb has quit IRC | 09:14 | |
nagyz | not on the compute node | 09:14 |
nagyz | ok, added on the default gw, still no hits on the rule | 09:16 |
*** mancdaz has quit IRC | 09:18 | |
doude_ | nagyz: yes, you must add this rule on each compute node | 09:19 |
doude_ | nagyz: in flat mode | 09:19 |
*** mancdaz has joined #openstack | 09:19 | |
dsockwell | win 27 | 09:21 |
nagyz | I've added it on the gw the VMs are using | 09:21 |
nagyz | there's no difference sadly | 09:22 |
nagyz | actually, all my console.logs look like that, even if it's working... | 09:22 |
nagyz | strange. | 09:22 |
nagyz | at least I can ping those, but nothing else | 09:23 |
*** jpgeek1 has quit IRC | 09:24 | |
*** 13WAAEE7D has joined #openstack | 09:24 | |
doude_ | nagyz: where is your nova-api server ? Behind your gateway ? | 09:24 |
nagyz | we're all in the same net :) 10.1.1.1 is the gw, .3 is cn1, .4 is cn2 | 09:24 |
nagyz | the VMs are using 10.1.1.128/25 | 09:25 |
*** ccc11 has quit IRC | 09:26 | |
*** Capashen has joined #openstack | 09:28 | |
nagyz | i've opened up vnc on the vm, and all I get is a blinking cursor :( | 09:33 |
nagyz | it used to work with one compute node, now even that doesn't work... :) | 09:33 |
*** berto- has quit IRC | 09:35 | |
doude_ | nagyz: so the compute nodes and api server are on the same subnet ? | 09:36 |
nagyz | yes | 09:36 |
nagyz | as I told you, everything is already in 10.1.1.0/24 | 09:36 |
nagyz | let me do a clean install... | 09:37 |
doude_ | nagyz: I think you must set the NAT rule on all nodes | 09:37 |
nagyz | I already did | 09:38 |
nagyz | no packet matched the rule | 09:38 |
nagyz | set it up on the second compute node and the default gw | 09:38 |
nagyz | didn't change a thing | 09:38 |
*** KAM has joined #openstack | 09:39 | |
*** irahgel has joined #openstack | 09:40 | |
KAM | Is there a way to execute commands on a running server using OpenStack? | 09:42 |
soren | What? | 09:43 |
KAM | does 'what' means that there's not ? | 09:44 |
nagyz | huh? | 09:44 |
*** BMRU has joined #openstack | 09:45 | |
KAM | I want to execute certain scripts on a running machine using OpenStack. Is this possible? (without using a server running inside the machine) | 09:46 |
KAM | s/machine/server | 09:47 |
*** Deirz- has joined #openstack | 09:47 | |
*** mgoldmann has joined #openstack | 09:49 | |
soren | KAM: I think you're looking for something like Puppet. | 09:50 |
*** MarcMorata has joined #openstack | 09:51 | |
doude_ | nagyz: I don't understand your problem. Does the VM boot? | 09:52 |
nagyz | I'm reinstalling the compute node now, and starting from scratch. | 09:52 |
nagyz | but yes, the VMs do boot, I can ping it, but nothing happens except that plymouth segfaulted. | 09:53 |
*** BMRU has left #openstack | 09:53 | |
KAM | soren: puppet requires an internal server to run inside the VM, no ? | 09:54 |
soren | KAM: This is the first time you mention VM's. | 09:54 |
KAM | sorry, I meant server | 09:55 |
nagyz | KAM, what you're saying doesn't even make sense. | 09:56 |
nagyz | try to rephrase it if you expect answers. :) | 09:56 |
soren | KAM: I think you need to start over and explain what you're trying to do. | 09:56 |
KAM | Ok, I'm trying to execute certain scripts on a running server (instance) using OpenStack. | 09:57 |
KAM | Is it clear now ? | 09:58 |
soren | Not really. | 09:58 |
soren | Why don't you just... you know... do it? | 09:58 |
soren | If I want to run a command I just... you know... run it. | 09:59 |
soren | Now, if I have A MILLION machines on which I want to do this... Then it's a different story. | 09:59 |
soren | ...and this brings us back to the "explain what you're trying to do" bit. | 09:59 |
*** adjohn has quit IRC | 10:00 | |
Deirz- | hello there. I set up nova on two machines: on the first run api, network, scheduler, compute and glance; on the second, only compute. I put these lines into the nova conf: "--glance_host=172.16.8.203 --image_service=nova.image.glance.GlanceImageService", and on the first machine it works as intended, but on the second `euca-describe-images` says "no route to host" | 10:00 |
KAM | ah, I meant I'm launching this server (instance) for somebody else, and then while he is working I want to execute some scripts in the background. | 10:00 |
KAM | Without me actually accessing the machine. | 10:00 |
soren | KAM: So why don't you? | 10:00 |
soren | Aha! | 10:00 |
KAM | s/machine/server | 10:00 |
soren | Ok, so I'll go back to my original answer: Puppet. | 10:01 |
Deirz- | but `wget 172.16.8.203:9191/images -q -O -`, invoked on second machine, shows me a list of images | 10:02 |
KAM | soren: Yup, but that requires other software to be installed on the server. Something that actually listens for puppet, no? | 10:02 |
dsockwell | puppet connects out | 10:03 |
dsockwell | KAM: if you were using openvz you might use vzctl to inject commands/processes into the machine | 10:03 |
dsockwell | what's your goal? what do these scripts do? | 10:03 |
soren | KAM: Yes. | 10:03 |
soren | KAM: Nothing you've said suggests that this is a problem. | 10:04 |
soren | KAM: Erl, well, as dsockwell says, puppet connects out (usually). But it needs to run inside the vm. | 10:04 |
dsockwell | you could put a key into /root/.ssh/authorized_keys to allow yourself root access to the vm | 10:05 |
dsockwell | that would be part of the image file, unless that's particularly forbidden | 10:05 |
KAM | dsockwell: Just executing monitoring scripts based on what the user is doing. | 10:05 |
dsockwell | you could inject the scripts into the image using init.d or rc.local | 10:06 |
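A minimal sketch of the rc.local approach dsockwell suggests; monitor.sh is a hypothetical script baked into the image:

    # appended to /etc/rc.local inside the image, above the final 'exit 0'
    /usr/local/bin/monitor.sh &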
dsockwell | hey, does any part of a typical compute node require direct access to the database that nova-api keeps? | 10:08 |
dsockwell | that would be a silly thing to allow, wouldn't it? | 10:08 |
*** saju_m has joined #openstack | 10:13 | |
*** wcang has quit IRC | 10:18 | |
*** chemikadze has quit IRC | 10:22 | |
*** circloud has quit IRC | 10:23 | |
*** circloud has joined #openstack | 10:25 | |
*** chemikadze has joined #openstack | 10:28 | |
*** dobber has quit IRC | 10:32 | |
*** toytoy has quit IRC | 10:37 | |
*** duffman has quit IRC | 10:47 | |
*** duffman has joined #openstack | 10:48 | |
*** adjohn has joined #openstack | 10:58 | |
*** dobber has joined #openstack | 11:03 | |
*** toytoy has joined #openstack | 11:04 | |
*** 13WAAEE7D has quit IRC | 11:07 | |
*** toytoy has quit IRC | 11:09 | |
*** miclorb_ has joined #openstack | 11:13 | |
*** adjohn has quit IRC | 11:25 | |
soren | dsockwell: It does require that, actually. For now, at least. | 11:30 |
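Concretely, that access comes from the flag every nova service reads from nova.conf; a sketch with a placeholder controller address and credentials:

    --sql_connection=mysql://nova:secret@10.1.1.3/nova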
*** ctennis has quit IRC | 11:37 | |
freeflying | can I just use public IPs for a multi-node installation of nova? | 11:42 |
*** ziyadb has joined #openstack | 11:47 | |
ziyadb | morning | 11:48 |
*** ctennis has joined #openstack | 11:50 | |
*** ctennis has joined #openstack | 11:50 | |
mpatel | I am having issues with the virbr0 interface; it seems SNAT is not working properly and is breaking network connections | 11:51 |
nagyz | when I try to query http://169.254.169.254/2009-04-04/meta-data/instance-id from the second compute node, it gives me a python exception, saying 'no floating ip for address address_of_my_compute_node' | 11:53 |
nagyz | do I need to define floating IPs? | 11:53 |
*** katkee has quit IRC | 11:55 | |
nagyz | I don't want to use floating IPs... | 12:00 |
*** PeteDaGuru has joined #openstack | 12:00 | |
vodanh86 | nagyz no, if you query the above url from any compute node, an exception will occur | 12:01 |
nagyz | https://answers.launchpad.net/nova/+question/159317 | 12:01 |
nagyz | this is my exact same problem | 12:01 |
*** markvoelker has joined #openstack | 12:01 | |
vodanh86 | show your compute node ifconfig | 12:02 |
vodanh86 | and "ip addr" | 12:03 |
nagyz | http://pastebin.com/DBYJ6zc7 | 12:04 |
nagyz | the other compute node is 10.1.1.3, it's running nova-api too | 12:04 |
nagyz | and nova-network, and nova-scheduler | 12:04 |
nagyz | this is what the console looks like for the affected VM: http://pastebin.com/PfivLnPi | 12:05 |
*** msivanes has joined #openstack | 12:05 | |
nagyz | the VM's default gw is .129, which is an IP on the other compute node, and the other compute node has the iptables prerouting rule | 12:06 |
nagyz | done automatically by nova-network, I guess | 12:06 |
*** Krooks has joined #openstack | 12:07 | |
vodanh86 | did you enable ipv4 forwarding? | 12:07 |
nagyz | since vnet0 is part of the bridge, why should I? | 12:08 |
nagyz | at least on cn2 | 12:08 |
nagyz | on cn1, I didn't touch anything; nova-network handles that | 12:08 |
*** Linex has quit IRC | 12:08 | |
mpatel | the network gets messed up when the instance is started | 12:10 |
nagyz | doesn't qemu assign vnet0 to it? | 12:10 |
*** ton_katsu has quit IRC | 12:10 | |
nagyz | interesting, if I do a tcpdump on vnet0, I see ARP requests for 10.1.1.129 | 12:11 |
nagyz | which is the default gw for the VM, and one of the IPs of the other compute node | 12:11 |
nagyz | why can't it find it using ARP? | 12:11 |
nagyz | on br100, I see the arp reply | 12:11 |
nagyz | but not on vnet0... | 12:12 |
*** j05h has joined #openstack | 12:17 | |
nagyz | has anyone seen something like that? | 12:21 |
doude_ | Hi, how can I install the latest version (2.5.7) of python-novaclient over my distro version (2.4.2)? | 12:23 |
doude_ | I got the latest code from launchpad. If I install it with (python setup.py install), I end up with the 2 versions on my system | 12:24 |
*** katkee has joined #openstack | 12:30 | |
*** mies has quit IRC | 12:30 | |
doude_ | Ok, I solved my problem | 12:30 |
doude_ | but I got another one | 12:31 |
*** ccc11 has joined #openstack | 12:32 | |
doude_ | I just created an admin user and tried to query the OSAPI with its credentials (nova list for example), but I get this error: http://paste.openstack.org/show/1847/ | 12:33 |
*** chetan has joined #openstack | 12:34 | |
*** miclorb_ has quit IRC | 12:34 | |
doude_ | in the nova-api logs, I can see successful authentication | 12:34 |
chetan | how do I save an instance to relaunch it as a separate VM in openstack? | 12:35 |
nagyz | this whole networking mess is so annoying. | 12:36 |
chetan | Or are EBS-bootable images possible in OpenStack? | 12:36 |
*** aliguori has joined #openstack | 12:36 | |
mpatel | I am also having a problem with the network | 12:37 |
nagyz | the VM on the second compute node doesn't get any ARP replies | 12:37 |
nagyz | so it can't talk to the metadata server | 12:37 |
nagyz | so it won't boot. | 12:37 |
mpatel | virbr0 breaks all the network connectivity | 12:37 |
mpatel | yes same here it seems the SNAT rules are broken in iptables | 12:38 |
nagyz | I'm talking about ARP. | 12:38 |
*** dysinger has quit IRC | 12:38 | |
nagyz | that has nothing to do with SNAT/DNAT | 12:38 |
*** Eyk^off is now known as Eyk | 12:40 | |
*** gaveen has quit IRC | 12:40 | |
*** HugoKuo_ has joined #openstack | 12:44 | |
*** vodanh86 has quit IRC | 12:45 | |
*** HugoKuo has quit IRC | 12:48 | |
*** wcang has joined #openstack | 12:50 | |
*** hggdh has quit IRC | 12:53 | |
*** hggdh has joined #openstack | 12:55 | |
*** shentonfreude has quit IRC | 12:59 | |
*** marrusl has joined #openstack | 13:02 | |
*** ziyadb has quit IRC | 13:03 | |
*** mdomsch has quit IRC | 13:06 | |
*** ziyadb has joined #openstack | 13:07 | |
*** fulc has joined #openstack | 13:09 | |
*** jaypipes has joined #openstack | 13:13 | |
*** circloud has quit IRC | 13:14 | |
*** ccc11 has quit IRC | 13:16 | |
*** circloud has joined #openstack | 13:17 | |
*** ccc11 has joined #openstack | 13:17 | |
nagyz | so for the second compute node, in the case of the default vlan networking, should I add a network address to its bridge? | 13:19 |
nagyz | or shouldn't that be done automatically? | 13:19 |
soren | Whatever needs to happen will happen automatically. | 13:21 |
nagyz | in the instance I keep seeing 13:18:42 [ 2/100]: url error [[Errno 101] Network is unreachable] | 13:21 |
soren | When I say "will" I of course mean "should". | 13:23 |
soren | :) | 13:23 |
nagyz | :) | 13:23 |
nagyz | I see vnet0 being part of the bridge, automatically created (this is on the second node, so there's no nova-network running here) | 13:24 |
soren | Right. | 13:24 |
*** kbringard has joined #openstack | 13:24 | |
nagyz | how could I find out what's wrong in this case? | 13:25 |
nagyz | since in this case it's set to dhcp, it could be that it doesn't see the dhcp reply / can't make the request | 13:26 |
nagyz | that would cause a network unreachable error | 13:26 |
*** kbringard has quit IRC | 13:26 | |
*** kbringard has joined #openstack | 13:27 | |
nagyz | how could I verify that the second compute node indeed sees the traffic in the vlan? | 13:28 |
*** bastichelaar has joined #openstack | 13:28 | |
nagyz | is it OK to assign an IP from the same subnet to br100 on it? | 13:28 |
kbringard | you could just tcpdump it | 13:28 |
bastichelaar | is anyone using openstack with lxc? | 13:29 |
kbringard | DHCP is broadcast, so if the machine you're running tcpdump on is in the same broadcast domain it should see the traffic | 13:30 |
*** evil_e_ has joined #openstack | 13:30 | |
nagyz | kbringard, let me give it a try. | 13:30 |
nagyz | so on the second node, it's ok to tcpdump br100 :) | 13:30 |
kbringard | should be | 13:30 |
nagyz | lets see | 13:30 |
kbringard | tcpdump -i br100 -vvv -s 1500 port 67 or port 68... I think that'll do it | 13:31 |
fulc | Hello, I just managed to install a dual-node configuration using StackOps. The smart installer is at http://address:8888/, but when I try using https://address:8773/ to actually have it running, I get this error (Error code: ssl_error_rx_record_too_long) | 13:31 |
kbringard | oh, it looks like there is a program called dhcpdump too | 13:31 |
nagyz | kbringard, right after reboot if I don't restart nova-network manually, it gives me an error message when the scheduler first wants to start a VM | 13:32 |
nagyz | is that normal? :p | 13:32 |
kbringard | hmmm | 13:32 |
nagyz | after restart, it's ok. | 13:32 |
kbringard | doesn't seem like it | 13:32 |
*** foxtrotdelta has joined #openstack | 13:32 | |
nagyz | right now I have eth0 without any bridge or anything on both hosts with 10.1.1.0/24 ips, and eth1 specified as the vlan interface, and a 10.0.0.0/24 subnet on it | 13:33 |
nagyz | does that sounds correct? | 13:33 |
nagyz | I mean, feasible. | 13:33 |
*** wcang has quit IRC | 13:33 | |
kbringard | yea, totally | 13:33 |
kbringard | that's what I do | 13:33 |
bastichelaar | can anyone help me with some issues with LXC? | 13:33 |
nagyz | I see the DHCP request on br100 on the second compute node; there's a DHCP reply which I see at the FIRST compute node | 13:33 |
kbringard | bastichelaar: sorry, never used LXC | 13:34 |
nagyz | but not at the second one... | 13:34 |
nagyz | hm, wait, the mac addresses don't match | 13:34 |
*** msivanes has quit IRC | 13:34 | |
*** PeteDaGuru has quit IRC | 13:34 | |
*** evil_e has quit IRC | 13:34 | |
*** klumpie has quit IRC | 13:34 | |
*** klumpie has joined #openstack | 13:34 | |
nagyz | that means I don't even see the dhcp requests on the first node, running nova-network and the dhcp server | 13:35 |
nagyz | that would mean they aren't in the same vlan? | 13:35 |
kbringard | well, how many networks do you have configured? | 13:35 |
nagyz | only one, 10.0.0.0/24 | 13:35 |
nagyz | I can see it with nova-manage fixed list, and network list | 13:36 |
nagyz | this is the error I see if I don't manually restart nova-network on the second node: http://pastebin.com/cdM6Tszj | 13:36 |
nagyz | and this is what it does after manual restart: http://pastebin.com/84H1bv1w | 13:37 |
kbringard | do you have eth1 up with no IP configured? | 13:37 |
nagyz | hm, doesn't seem so. that causes the first error then. let me add it to /etc/network/interfaces | 13:38 |
kbringard | auto eth1 | 13:38 |
kbringard | iface eth1 inet manual | 13:38 |
kbringard | up ifconfig eth1 up | 13:38 |
*** mies has joined #openstack | 13:38 | |
kbringard | I have that in my /etc/network/interfaces file | 13:38 |
kbringard | it'll bring up eth1 with no IP | 13:38 |
nagyz | ok, let me add it :-) | 13:38 |
kbringard | you'll need to add it to all your compute nodes as well | 13:38 |
kbringard | because if the interface isn't there, the bridge and vlan can't attach to it :-) | 13:39 |
nagyz | that's true :) | 13:39 |
nagyz | as an enhancement, openstack could set it to up state :p | 13:39 |
kbringard | haha, feel free to code it in :-p | 13:39 |
nagyz | actually, I have some enhanced security functionality patches | 13:39 |
kbringard | sweet | 13:40 |
nagyz | but I'm waiting for legal to say if we can release it or not... | 13:40 |
kbringard | doh | 13:40 |
nagyz | we're a big company :) | 13:40 |
nagyz | has been waiting for a month. | 13:40 |
kbringard | it's just text... if it happens to get emailed to someone accidentally.. ;-) | 13:40 |
nagyz | we're releasing a paper proposing additional security to general IaaS clouds | 13:40 |
*** kbringard has left #openstack | 13:40 | |
*** kbringard has quit IRC | 13:41 | |
*** deepa has quit IRC | 13:41 | |
*** deepa has joined #openstack | 13:41 | |
*** kbringard has joined #openstack | 13:41 | |
kbringard | oops | 13:41 |
nagyz | :) | 13:41 |
kbringard | sorry about that | 13:41 |
nagyz | it's ok | 13:41 |
kbringard | I blame the VPN | 13:42 |
*** huslage has joined #openstack | 13:42 | |
*** PeteDaGuru has joined #openstack | 13:43 | |
nagyz | I misspelled the syntax in interfaces, and now it got stuck at mounting the nfs share... | 13:43 |
nagyz | gonna have to fix it with sweet init=/bin/bash | 13:43 |
kbringard | doh | 13:43 |
huslage | morning. I just found that one of the backing images for several servers has disappeared. coworkers, yay. I've recreated it, but it has a different id. Should I just update the DB with the new image's ID for the affected servers? | 13:43 |
kbringard | I hate that | 13:43 |
kbringard | huslage: you shouldn't have to... | 13:43 |
nagyz | disappear? | 13:43 |
*** j05h has quit IRC | 13:44 | |
kbringard | in theory it should download the image again from glance | 13:44 |
huslage | someone deleted it nagyz | 13:44 |
huslage | out of glance | 13:44 |
huslage | it's GONE | 13:44 |
kbringard | and then use the image it just downloaded as the backing image for the instance it's building | 13:44 |
kbringard | oh | 13:44 |
kbringard | out of glance | 13:44 |
*** msivanes has joined #openstack | 13:44 | |
huslage | now i'm seeing: (nova): TRACE: Error: Image 10 could not be found. | 13:44 |
kbringard | you still shouldn't have to worry about it | 13:44 |
*** npmapn has quit IRC | 13:44 | |
huslage | when i try to do anything to those servers | 13:44 |
kbringard | that's weird | 13:44 |
huslage | i'm a little worried that they won't reboot | 13:44 |
kbringard | in theory the running servers should keep running | 13:45 |
huslage | and that would be bad mkay | 13:45 |
kbringard | and new instances should launch, so long as you use the new image id | 13:45 |
huslage | yes kbringard but the old instances will not reboot, am i correct? | 13:45 |
huslage | well i can't snapshot them to back them up at least | 13:45 |
kbringard | hmmm, I'm unsure | 13:45 |
kbringard | I've never had that happen :-) | 13:45 |
huslage | me either :) | 13:46 |
kbringard | is there one that is OK to lose? | 13:46 |
kbringard | try it :-D | 13:46 |
huslage | edge case ftw | 13:46 |
huslage | yes i'll do that | 13:46 |
kbringard | I would think it'd be OK | 13:46 |
*** f4m8 is now known as f4m8_ | 13:46 | |
kbringard | the image is gone from _base as well? | 13:46 |
huslage | no | 13:47 |
huslage | looks like it rebooted fine kbringard | 13:48 |
*** openpercept has quit IRC | 13:48 | |
kbringard | cool, that's good to know | 13:48 |
*** jatsrt has joined #openstack | 13:48 | |
nagyz | ok, so now that eth1 is finally up | 13:48 |
kbringard | I don't see why it wouldn't... there's an autonomous copy running in $instances/instance-$id/ | 13:49 |
nagyz | nova-network doesn't have to be restarted manually | 13:49 |
jatsrt | morning all | 13:49 |
kbringard | nagyz: w00t | 13:49 |
jatsrt | or evening | 13:49 |
jatsrt | been away for a bit working on another project | 13:49 |
*** vladimir3p_ has joined #openstack | 13:49 | |
nagyz | ok, network is still unreachable | 13:49 |
jatsrt | just updated to latest head | 13:49 |
nagyz | be back in 10 | 13:49 |
jatsrt | and I am getting "trying to add VLAN #100 to IF -:None" | 13:49 |
jatsrt | so some conf param changed | 13:49 |
huslage | kbringard: i just can't snapshot it with 'nova image create' | 13:49 |
jatsrt | thought it changed from vlan_interface to bridge_interface, but that did not seem to work | 13:50 |
jatsrt | any thoughts? | 13:50 |
*** NeCRoManTe has joined #openstack | 13:50 | |
*** openpercept_ has joined #openstack | 13:50 | |
kbringard | huslage: I'm not sure, I don't use that command... I just qemu-img snapshot it | 13:51 |
huslage | kbringard: k | 13:51 |
kbringard | https://github.com/kevinbringard/OpenStack-tools/blob/master/snapshot-instance.bash | 13:51 |
kbringard | that's how I do it | 13:51 |
*** vladimir3p_ has quit IRC | 13:51 | |
kbringard | drop that on the compute node | 13:51 |
kbringard | and run it with -i $instance-id | 13:52 |
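For reference, a hedged sketch of snapshotting an instance's disk by hand with qemu-img, as kbringard describes; the path assumes the stock /var/lib/nova/instances layout and an example instance id:

    cd /var/lib/nova/instances/instance-00000001
    qemu-img snapshot -c backup-$(date +%F) disk   # create an internal snapshot of the qcow2
    qemu-img snapshot -l disk                      # list the snapshots on that disk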
kbringard | jatsrt: reading back | 13:52 |
huslage | cute kbringard | 13:52 |
kbringard | jatsrt: hrmm, lemme look through the code and see if I can find it | 13:53 |
jatsrt | http://paste.openstack.org/show/1848/ | 13:53 |
jatsrt | full trace | 13:53 |
jatsrt | this is what I get for taking a couple of weeks off :-) | 13:53 |
kbringard | haha | 13:54 |
kbringard | that'll teach ya | 13:54 |
bastichelaar | openstack and lxc doesnt seem to be quite popular? | 13:54 |
kbringard | what hypervisor are you using? | 13:55 |
jatsrt | me? kvm, but this is on the nova-network node | 13:55 |
kbringard | nova/network/manager.py:flags.DEFINE_string('vlan_interface', None, | 13:55 |
jatsrt | nova-network will not start | 13:55 |
kbringard | it looks like it should still be vlan_interface | 13:55 |
jatsrt | that's what I thought | 13:55 |
kbringard | I'm running 1244 with --vlan_interface and it works | 13:56 |
jatsrt | but then in the linux_net.py, everything says bridge_interface | 13:56 |
jatsrt | may have to go through code diffs to see if something changed | 13:56 |
jatsrt | def ensure_vlan(vlan_num, bridge_interface): | 13:57 |
jatsrt | unless it thinks I am no longer in vlan network mode | 13:57 |
jatsrt | maybe that changed? | 13:57 |
kbringard | yea, interesting | 13:58 |
kbringard | nova/network/linux_net.py: _execute('sudo', 'vconfig', 'add', bridge_interface, vlan_num) | 13:58 |
kbringard | there you go | 13:58 |
kbringard | probably still working for me because I have not removed the vlan tagging since I updated | 13:58 |
kbringard | but | 13:58 |
kbringard | check it | 13:58 |
*** Jedicus2 has quit IRC | 13:58 | |
kbringard | hrmm, n/m, that's in the tests | 13:59 |
kbringard | bin/nova-manage: bridge_interface = FLAGS.flat_interface or FLAGS.vlan_interface | 13:59 |
kbringard | only in nova-manage? | 13:59 |
jatsrt | weird | 14:00 |
kbringard | indeed | 14:00 |
kbringard | seems like a bug? :-/ | 14:00 |
jatsrt | wonderful | 14:00 |
jatsrt | the morning I try to update :-( | 14:00 |
kbringard | I'm really not sure... don't know why it would have changed | 14:00 |
kbringard | which is to say, I don't know what the reasoning for a change was | 14:00 |
jatsrt | right | 14:01 |
kbringard | did you do a db sync? | 14:01 |
kbringard | I see a migration that adds the bridge_interface column | 14:01 |
jatsrt | yes I did | 14:01 |
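For anyone following along, the migration kbringard mentions is applied with the usual sync command, run wherever nova-manage and the DB credentials are available:

    nova-manage db sync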
mpatel | kbringard: when the instances are started, the network gets screwed up, and when I remove the network configuration for the virbr0 interface the external network works fine, so what could be wrong with the virtual network configuration? | 14:02 |
jatsrt | I'll look, maybe that is what it is, not populated in the net table | 14:02 |
kbringard | jatsrt: I didn't look in depth at how it's used, I just happened to notice it while grepping | 14:02 |
*** marrusl has quit IRC | 14:02 | |
kbringard | mpatel: hmmmm | 14:02 |
kbringard | how many NICs do you have in the machine? | 14:03 |
mpatel | two | 14:03 |
jatsrt | tracing through the code that might make sense, looks like that bridge_interface param that is None is coming from the DB | 14:03 |
jatsrt | two nics | 14:03 |
kbringard | jatsrt: sorry, I was asking mpatel :-) | 14:03 |
jatsrt | ha | 14:03 |
kbringard | jatsrt: I wonder if you recreate your networks | 14:04 |
jatsrt | yeah, going to try now | 14:04 |
kbringard | that could be why it's only in nova-manage | 14:04 |
*** zenmatt has joined #openstack | 14:04 | |
kbringard | in theory that would populate that field | 14:04 |
mpatel | brctl shows br100 and virbr0 interface | 14:04 |
kbringard | mpatel: what are they attaching to? | 14:05 |
kbringard | and what does virbr0 belong to? | 14:05 |
kbringard | is that the default network that is coming up with libvirt? | 14:05 |
jatsrt | "update networks set bridge_interface = 'eth1';" did it | 14:05 |
kbringard | jatsrt: hot | 14:05 |
mpatel | br100 is attached to eth1 and virbr0 to none | 14:05 |
kbringard | well done sir | 14:05 |
*** dprince has joined #openstack | 14:05 | |
* kbringard updates his DBs too | 14:06 | |
*** marrusl has joined #openstack | 14:06 | |
kbringard | mpatel: is virbr0 part of your openstack setup? | 14:06 |
kbringard | or is that the default network that libvirt sets up when it's installed? | 14:06 |
mpatel | no, I didn't configure anything for virbr0; not sure how it got deployed | 14:07 |
mpatel | I believe it's default | 14:07 |
kbringard | try running | 14:07 |
kbringard | virsh net-destroy default | 14:08 |
mpatel | also I see some SNAT and DNAT rules in iptables | 14:08 |
*** errr_ is now known as errr | 14:08 | |
*** errr has joined #openstack | 14:08 | |
mpatel | ok that command removed the virbr0 interface | 14:09 |
kbringard | ok cool, so then do | 14:09 |
kbringard | virsh net-undefine default | 14:09 |
kbringard | that should keep it from starting at boot | 14:10 |
kbringard | seems that was interfering with your OpenStack setup for some reason | 14:10 |
kbringard | I think by default it's on 192.168.122.0/24 | 14:10 |
fulc | Hello, I just managed to install a dual-node configuration using StackOps. The smart installer is at http://address:8888/, but when I try using https://address:8773/ to actually have it running, I get this error (Error code: ssl_error_rx_record_too_long) | 14:10 |
*** ldlework has joined #openstack | 14:11 | |
kbringard | fulc: are you using FireFox 3? | 14:12 |
kbringard | or 4? | 14:12 |
mpatel | is there something I need to make sure of while installing openstack to avoid those issues? | 14:12 |
fulc | it says 5 | 14:12 |
kbringard | sometimes Firefox gives that error when SSL is running on a non standard port | 14:12 |
kbringard | fulc: for esses and gees maybe try a different browser? | 14:12 |
fulc | ooh, so what would you recommend for ubuntu? | 14:12 |
kbringard | hrmm | 14:13 |
kbringard | chrome maybe? | 14:13 |
fulc | i have always used firefox | 14:13 |
kbringard | yea, I dunno... I'm one of those dirty Mac users | 14:13 |
fulc | safari? | 14:13 |
kbringard | yea, I use Safari and Firefox (but I've never used stackops, so I'm not sure about that specific issue) | 14:13 |
kbringard | but I have seen that exact error when trying to login to the remote console on a blade enclosure | 14:14 |
kbringard | in FF | 14:14 |
kbringard | I'd probably try Chrome | 14:14 |
kbringard | mpatel: in theory it shouldn't conflict, but it seems something about your setup is causing it to | 14:14 |
fulc | yeah, i dont think this error is connected to openstack | 14:14 |
kbringard | mpatel: when I install compute nodes, part of my install script is to run those virsh commands | 14:14 |
*** Jedicus2 has joined #openstack | 14:15 | |
kbringard | so I guess just make removing them part of your deploy procedure | 14:15 |
*** cweidenk1ller is now known as cweidnekeller | 14:15 | |
mpatel | kbringard: is your script available to the public? | 14:16 |
huslage | how does openstack do locality with volumes? does it try to put a machine and its volume on the same machine if i'm running nova-volume on multiple machines? | 14:16 |
kbringard | mpatel: not really, but it's just a few apt-get commands | 14:16 |
*** cweidnekeller is now known as cweidenkeller | 14:16 | |
mpatel | kbringard: what are those few commands, so I can try | 14:17 |
kbringard | mpatel: http://paste.openstack.org/show/1849/ | 14:17 |
kbringard | huslage: good question... I'm not sure... I've only ever run one instance of nova-volume | 14:17 |
huslage | kbringard: there is so much to do :) | 14:18 |
kbringard | in our environment we've opted to use mostly NFS exports | 14:18 |
*** cweidenkeller has quit IRC | 14:18 | |
kbringard | so I have not messed with nova-volume a ton, beyond just the basics | 14:18 |
mpatel | kbringard: I also see some SNAT and DNAT rules with MASQUERADE network 192.168.122.0/24, so what are those for | 14:18 |
*** circloud has quit IRC | 14:19 | |
kbringard | mpatel: that is part of the default network that we destroyed | 14:19 |
*** jkoelker has joined #openstack | 14:19 | |
kbringard | in theory you should be able to flush the iptables and then restart nova-network to get rid of them | 14:19 |
kbringard | but, ymmv | 14:19 |
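A hedged sketch of that cleanup; note that flushing the nat table also drops nova's own NAT rules until nova-network re-adds them on restart:

    iptables -t nat -F             # flush the leftover MASQUERADE/DNAT rules for 192.168.122.0/24
    service nova-network restart   # nova-network rebuilds the rules it owns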
*** circloud has joined #openstack | 14:19 | |
jatsrt | what is "agent" in nova-manage? | 14:19 |
jatsrt | that's new in the past few weeks, I have so much catching up to do | 14:20 |
*** cweidenkeller has joined #openstack | 14:20 | |
kbringard | jatsrt: not sure... based on the context I'd guess it has to do with the new scheduler and zone stuff | 14:21 |
jatsrt | " arguments: os architecture version url md5hash" | 14:21 |
jatsrt | I'm intrigued | 14:21 |
fulc | ugh, some other error appears in chrome | 14:21 |
fulc | error 107 | 14:22 |
fulc | ssl again | 14:22 |
kbringard | yea | 14:22 |
kbringard | sounds like something is wonky in the SSL setup in the stackops thing | 14:22 |
kbringard | someone suggested disable SSl2 and enable SSL2? | 14:23 |
kbringard | err | 14:23 |
kbringard | sorry | 14:23 |
kbringard | disable SSL3 | 14:23 |
fulc | lemme try | 14:23 |
kbringard | that's in an old (3/20/2010) chrome support thread about the same thing | 14:23 |
kbringard | if that works, it may work in FF as well | 14:24 |
*** amccabe has joined #openstack | 14:24 | |
fulc | Error 112 (net::ERR_NO_SSL_VERSIONS_ENABLED) | 14:24 |
kbringard | lol | 14:24 |
fulc | zzzzzzzz | 14:24 |
kbringard | dude, I dunno | 14:24 |
kbringard | haha | 14:24 |
fulc | this is annoying | 14:24 |
kbringard | I've never used stackops, I have no idea what they're up to | 14:24 |
kbringard | is that that dell thing? | 14:25 |
*** jonkelly has joined #openstack | 14:25 | |
fulc | uhm | 14:25 |
fulc | i dont think so | 14:25 |
kbringard | no, I guess not (sorry, just not googling it) | 14:26 |
kbringard | now* | 14:26 |
*** icarus901 has quit IRC | 14:26 | |
fulc | it's just a distro with openstack already inside, so you dont bother getting all the dependencies right | 14:26 |
kbringard | yea, makes sense | 14:26 |
nagyz | back | 14:26 |
*** alexn65 has joined #openstack | 14:27 | |
nagyz | kbringard, so, I've started up a new instance, and now having a look at tcpdump... | 14:27 |
nagyz | :) | 14:27 |
nagyz | I see the DHCP requests going out on br100 | 14:28 |
nagyz | but no reply | 14:28 |
*** bcwaldon has joined #openstack | 14:28 | |
nagyz | and I don't see them on the first compute node | 14:28 |
kbringard | do you see them on the network controller? | 14:28 |
nagyz | first compute node = network controller | 14:28 |
kbringard | ah | 14:28 |
nagyz | but no, I can't say I do | 14:28 |
kbringard | are your physical switch ports configured correctly for the VLANs? | 14:29 |
nagyz | well, it's a virtual switch inside vmware... | 14:29 |
kbringard | I had that issue... one of my 16 blades' ports wasn't configured | 14:29 |
kbringard | ah | 14:29 |
kbringard | hmmm | 14:29 |
nagyz | vlan should work OOB there | 14:29 |
nagyz | IIRC | 14:29 |
kbringard | yea, I'd think so | 14:30 |
nagyz | what could I try? | 14:30 |
kbringard | so, another thing jatsrt just figured out | 14:31 |
*** cp16net_ has joined #openstack | 14:31 | |
kbringard | it looks like bridge_interface needs to be set in the DB | 14:31 |
kbringard | if you're running the latest trunk | 14:31 |
jatsrt | yep | 14:31 |
nagyz | no, it's the bundled version in natty | 14:31 |
kbringard | it looks like it's replaced vlan_interface | 14:31 |
jatsrt | update bridge_interface | 14:31 |
nagyz | but it's not trunk. :) | 14:31 |
jatsrt | easier than deleting and recreating all of your networks | 14:31 |
nagyz | I use the same nova.conf in both compute nodes, just changed myip | 14:31 |
kbringard | nagyz: ah... well, if you see bridge_interface in the DB i'd make sure it's populated | 14:32 |
nagyz | even on cactus? | 14:32 |
kbringard | I'm not sure when it was introduced | 14:32 |
kbringard | oh | 14:32 |
kbringard | well | 14:32 |
kbringard | then | 14:32 |
kbringard | haha | 14:32 |
kbringard | no idea | 14:32 |
nagyz | I had a look at the network table, and it has a "bridge" field | 14:32 |
jatsrt | has anyone tried to pause/suspend VMs | 14:32 |
kbringard | jatsrt: I have not | 14:32 |
jatsrt | nagyz: not the same issue then | 14:33 |
kbringard | we mostly use the ec2 api | 14:33 |
jatsrt | recently they added a bridge_interface field too | 14:33 |
nagyz | is there an easy way to check if both machines see each other thru the vlan? | 14:33 |
nagyz | just for testing can I add a random IP to br100? | 14:33 |
alexn65 | hi! can someone tell me about networks, bridging and public IPs? For now, in flat network mode, instances have private net IPs and cannot reach the internet | 14:33 |
nagyz | two from the same subnet of course on the two hosts | 14:33 |
jatsrt | kbringard: just thinking, if I have to reboot a compute node for maintenance and don't want to live migrate, I could suspend/unsuspend them | 14:33 |
nagyz | well, br100 on the host that has nova-network already has 10.0.0.1 | 14:34 |
kbringard | jatsrt: yea, it seems like a good feature, I've just not played with it | 14:34 |
kbringard | nagyz: I'd bring up a new interface | 14:34 |
mpatel | kbringard: now I can't reach it through the public IP on the eth0 interface after restarting nova-network | 14:34 |
kbringard | like, eth1:1 or something | 14:34 |
nagyz | that won't be part of the vlan | 14:34 |
kbringard | if eth1 is tagged | 14:34 |
kbringard | it should work | 14:34 |
kbringard | that's how I test | 14:35 |
nagyz | I did add 9.9.9.1 to eth1:1 on the second compute node | 14:35 |
nagyz | and 9.9.9.2 to the first compute node on eth1:1 | 14:35 |
nagyz | and they can ping each other | 14:36 |
kbringard | and if you cat /proc/net/vlan/config you see that eth1 is tagged with the correct vlan on both nodes? | 14:36 |
kbringard | ah, then that would indicate that the vlan tagging is working properly | 14:36 |
nagyz | yes | 14:36 |
nagyz | they both have id 100 | 14:36 |
jaypipes | kbringard: you're up early today... | 14:37 |
kbringard | jaypipes: hah | 14:37 |
kbringard | I usually start work at 7 Mountain | 14:37 |
*** [1]RickB17 has joined #openstack | 14:37 | |
jaypipes | kbringard: either that or never went to sleep ;) | 14:37 |
kbringard | but I keep my mouth shut until I've had more than 3 or 4 cups of coffee ;-) | 14:37 |
jaypipes | :) | 14:37 |
kbringard | this morning I made the mistake of talking too soon ;-) | 14:37 |
nagyz | but if I add an ip from the 10.0.0.0/24 range to br100 on the second node, should that be able to ping 10.0.0.1 on the first node? | 14:37 |
kbringard | in theory | 14:38 |
nagyz | lets see.. | 14:38 |
huslage | i hate computers | 14:38 |
nagyz | ok, added the IP | 14:38 |
nagyz | nothing happens | 14:38 |
kbringard | I'd also try adding like 10.0.0.10 (or some unused address) to eth1:1 and see if that can ping 10.0.0.1 | 14:38 |
nagyz | I don't even see the ping coming from cn1 | 14:38 |
nagyz | hm | 14:38 |
nagyz | lets see | 14:38 |
*** cweidenkeller has quit IRC | 14:39 | |
nagyz | no, it cant | 14:39 |
*** cweidenkeller has joined #openstack | 14:39 | |
nagyz | and I don't even see that ping coming on br100 on the first node | 14:39 |
alexn65 | does gateway on br100 need special conf? iptables or so? | 14:39 |
kbringard | hey jaypipes, do you know what this AgentBuildCommands stuff in nova-manage is all about? | 14:40 |
*** vladimir3p has joined #openstack | 14:40 | |
kbringard | I'm intrigued, because it looks like it may be intended to build out nodes | 14:41 |
jaypipes | kbringard: hmm, I'm actually not sure about that. | 14:41 |
jaypipes | dabo: you know about that one? | 14:41 |
kbringard | it's not really a big deal, mostly just curious | 14:41 |
jaypipes | in unrelated news, I'm still getting used to the new Unity desktop in 11.04... upgraded last night... | 14:42 |
*** dysinger has joined #openstack | 14:42 | |
kbringard | hah | 14:42 |
kbringard | <--- running OSX | 14:42 |
kbringard | so I'll be getting used to the new Lion interface soon ;-) | 14:42 |
*** 14WABGEOF has joined #openstack | 14:43 | |
jaypipes | kbringard: indeed | 14:43 |
jatsrt | kbringard: already have Lion GM | 14:44 |
*** alexn65 has left #openstack | 14:44 | |
huslage | kbringard: Lion's pretty good | 14:45 |
huslage | except that my machine randomly turns off | 14:45 |
huslage | which is kind of annoying | 14:45 |
jatsrt | huslage: new GM release seems more stable now | 14:45 |
kbringard | hah, that seems counter productive | 14:45 |
huslage | jatsrt: that's the one i'm running | 14:46 |
kbringard | I saw that it was available in my dev downloads | 14:46 |
huslage | on a 2010 Core i7 MBP | 14:46 |
huslage | it's not THAT old of a machine lol | 14:46 |
kbringard | but I was busy drinking over the weekend :-p | 14:46 |
jatsrt | same as me! | 14:46 |
huslage | kbringard: good idea | 14:46 |
*** rchavik has quit IRC | 14:46 | |
jatsrt | he | 14:46 |
huslage | anyone have any idea why the dashboard is giving me fits? | 14:48 |
huslage | Could not import dashboard.views. Error was: No module named routes | 14:48 |
huslage | my django fu is pitiful | 14:48 |
*** chetan has quit IRC | 14:49 | |
kbringard | can't be worse than mine | 14:49 |
huslage | hah | 14:49 |
huslage | this stuff needs to GET TOGETHER | 14:49 |
huslage | too much potential and not enough coordination | 14:50 |
doude_ | Hi, who use Nova in a multizone mode ? | 14:55 |
deshantm | For development purposes, is it OK/recommended to use Ubuntu 11.04 or Ubuntu unstable as the base? (for context we are porting Xen Cloud Platform to Ubuntu) | 14:56 |
kbringard | I don't know that I'd say it's recommended... but I think if you want to use Xen as your hypervisor on Ubuntu you'll have to run it on 11.10 | 14:56 |
*** patcoll has joined #openstack | 14:57 | |
kbringard | since modern versions of Xen aren't really supported before that | 14:57 |
bastichelaar | still nobody here who uses lxc as hypervisor? | 14:58 |
deshantm | kbringard: thanks, yeah that's what we are going to target (as well as debian unstable) | 14:58 |
kbringard | I've never tried it, but soren tells me it works | 14:58 |
*** Shentonfreude has joined #openstack | 14:59 | |
kbringard | bbiab | 15:03 |
*** dragondm has joined #openstack | 15:03 | |
*** huslage has quit IRC | 15:07 | |
*** reidrac has quit IRC | 15:12 | |
*** ccc11 has quit IRC | 15:14 | |
doude_ | Who can help me to use the multizone mode ? | 15:18 |
*** foxtrotdelta has quit IRC | 15:18 | |
*** rupakg has quit IRC | 15:18 | |
*** rupakg has joined #openstack | 15:18 | |
nagyz | anyone know how I can set up a vSphere vSwitch so it won't eat my VLAN-tagged packets? | 15:20 |
*** med_out is now known as medberry | 15:21 | |
*** 14WABGEOF has quit IRC | 15:23 | |
*** mpatel has quit IRC | 15:23 | |
*** huslage has joined #openstack | 15:25 | |
*** ccc11 has joined #openstack | 15:27 | |
kbringard | smoser: not really related to OpenStack directly, but if you have a few moments, I have some questions about cloud-init | 15:27 |
smoser | kbringard, i'd be happy to ask, but i'm actually on holiday now. will return Monday. | 15:28 |
kbringard | oh, sorry, in that case I won't bug ya | 15:28 |
kbringard | enjoy your holiday! | 15:28 |
*** huslage has quit IRC | 15:29 | |
Jbain | nagyz: are you wanting to pass the tagged packets to a vm? | 15:30 |
*** dobber has quit IRC | 15:30 | |
nagyz | I want to set the switch to trunk mode | 15:30 |
nagyz | but I've found it | 15:30 |
nagyz | had to set the vlan id to 4095 | 15:31 |
Jbain | interesting | 15:31 |
*** mancdaz has quit IRC | 15:31 | |
nagyz | lets see if it works now | 15:32 |
*** Ephur has joined #openstack | 15:34 | |
doude_ | In multizone mode, no host filter is applied. I set flag 'default_host_filter' to 'nova.scheduler.host_filter.HostFilterScheduler', but the filter doesn't return any host | 15:35 |
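For context, a hedged sketch of the cactus-era scheduler flags involved here; the class names below are assumptions based on nova/scheduler/host_filter.py and may not match every tree, but the general point is that default_host_filter is supposed to name a filter class, while HostFilterScheduler is a scheduler driver:

```bash
# nova.conf excerpt (flag values are assumptions -- verify against your nova version)
# Select the filter-aware scheduler as the scheduler driver...
--scheduler_driver=nova.scheduler.host_filter.HostFilterScheduler
# ...and point default_host_filter at an actual HostFilter class, not the scheduler
--default_host_filter=nova.scheduler.host_filter.AllHostsFilter
```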
*** mancdaz has joined #openstack | 15:35 | |
*** KAM has left #openstack | 15:38 | |
heckj | deshantm: I'd recommend going with one of the LTS versions - which means 11.04 | 15:43 |
nagyz | so what would I need for live migration? | 15:47 |
nagyz | nova-manage vm live_migration doesn't work | 15:48 |
nagyz | I have the images on a shared nfs | 15:48 |
nagyz | and I'm using glance | 15:48 |
jatsrt | nagyz: what's not working about it? | 15:48 |
kbringard | are the images on shared nfs, or the instances directory? | 15:48 |
jatsrt | took me a week to get all the details right | 15:48 |
*** ccc11 has quit IRC | 15:48 | |
nagyz | well, I don't see any errors | 15:48 |
nagyz | but the machine stays on the original compute node | 15:48 |
kbringard | I don't think the images has to be on shared storage, but the instances directory does | 15:48 |
jatsrt | instances directory | 15:48 |
nagyz | I only have the instance directory shared between the nodes | 15:48 |
jatsrt | correct | 15:48 |
nagyz | instances, yes. | 15:49 |
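A minimal sketch of the shared-instances setup being discussed, assuming an NFS server name and the default instances path (both placeholders here):

```bash
# On every compute node: mount the same export at nova's instances path
sudo mount -t nfs nfs.example.com:/export/nova/instances /var/lib/nova/instances

# Sanity check: the nova user must be able to read and write it from every node,
# otherwise live migration fails with permission errors
sudo -u nova touch /var/lib/nova/instances/.migration_test && echo writable
```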
jatsrt | I also put a note on one of the wiki's about a libvirt param that needed to be set | 15:49 |
nagyz | I haven't seen that | 15:49 |
nagyz | http://pastebin.com/SFDtqKMr | 15:49 |
kbringard | ah, yea, I remember that now... | 15:49 |
jatsrt | the good news Is I can 100% say it does work | 15:49 |
nagyz | this is the scheduler log | 15:49 |
nagyz | what param? | 15:49 |
nagyz | :) | 15:50 |
jatsrt | make sure you are looking at your compute log too | 15:50 |
jatsrt | on both source and destination | 15:50 |
nagyz | no errors per se | 15:50 |
nagyz | oh, I found it | 15:50 |
nagyz | listen_tls | 15:50 |
jatsrt | don't remember the specific param, but you need to hard set the user | 15:50 |
jatsrt | you need to make sure you modify all the files listed in the wiki | 15:51 |
*** medberry is now known as med_out | 15:51 | |
jatsrt | that sets it up so it can allow remote communication | 15:51 |
nagyz | doing that now, sec | 15:51 |
jatsrt | between libvirt processes | 15:51 |
jatsrt | also, are you sure your permissions on your nfs mounts are all straight | 15:52 |
jatsrt | finally, make sure all of your compute nodes are the same architecture | 15:52 |
jatsrt | but that's for later | 15:52 |
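The libvirt changes jatsrt is alluding to are roughly the ones from the live-migration setup docs of this era; a sketch, with file paths and values as assumptions to be checked against the wiki page he mentions:

```bash
# /etc/libvirt/libvirtd.conf -- let libvirtd accept unauthenticated TCP connections
# from the other compute nodes (only on a trusted management network):
#   listen_tls = 0
#   listen_tcp = 1
#   auth_tcp = "none"

# /etc/default/libvirt-bin -- make the daemon actually listen (-l):
#   libvirtd_opts="-d -l"

# /etc/libvirt/qemu.conf -- hard-set the user/group qemu runs as, so file ownership
# on the shared NFS instances directory matches on every node:
#   user = "root"
#   group = "root"

sudo service libvirt-bin restart
```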
*** marrusl has quit IRC | 15:53 | |
nagyz | I have the same uids for the users on both cns | 15:54 |
jatsrt | ok | 15:54 |
nagyz | ok, set up libvirt | 15:54 |
nagyz | lets see | 15:54 |
nagyz | it says it migrated it :) | 15:55 |
kbringard | nagyz: sorry, I forgot about the libvirt stuff that had to be set | 15:55 |
nagyz | and it did | 15:55 |
nagyz | it's working :) | 15:56 |
jatsrt | congrats :-) | 15:56 |
kbringard | like I said, it's been awhile since I used it :-) | 15:56 |
nagyz | if you're ever in Switzerland, I'm buying a beer ;) | 15:56 |
nagyz | (or I'll be in Chicago in oct, if everything goes as planned...) | 15:56 |
*** lborda has joined #openstack | 15:58 | |
*** KAM has joined #openstack | 15:58 | |
*** dgags has joined #openstack | 16:01 | |
*** Razique has joined #openstack | 16:03 | |
*** marrusl has joined #openstack | 16:05 | |
Razique | Hi all | 16:08 |
MarkusWPHRorg | Hi Razique | 16:08 |
Razique | something went fubar with my san connection, now, im' unable to attach volumes to instances | 16:08 |
MarkusWPHRorg | Morning all (YMMV depending on time zone) | 16:08 |
Razique | the attachment goes fine, but the instance doesn't see it | 16:08 |
Razique | while nova considers it as attached, and I see it on the compute node | 16:08 |
Razique | any clue ? | 16:08 |
*** cp16net_ has quit IRC | 16:09 | |
*** koolhead17 is now known as koolhead11|Afk | 16:09 | |
MarkusWPHRorg | Razique: what kind of HBA are you using? | 16:09 |
*** MarcMorata has quit IRC | 16:10 | |
MarkusWPHRorg | or are you using an HBA/FC switch? | 16:10 |
Razique | MarkusWPHRorg: it's an HP SAN connected with iscsi | 16:10 |
*** maplebed has joined #openstack | 16:10 | |
MarkusWPHRorg | over fiber channel or ethernet? | 16:11 |
Razique | eth | 16:11 |
*** willaerk has quit IRC | 16:12 | |
*** maplebed has quit IRC | 16:12 | |
*** maplebed has joined #openstack | 16:13 | |
*** bcwaldon has quit IRC | 16:14 | |
*** mdomsch has joined #openstack | 16:16 | |
nagyz | be back tomorrow | 16:16 |
nagyz | have a nice day all | 16:16 |
*** nagyz has quit IRC | 16:16 | |
doude_ | Who can help me to use the multizone mode ? | 16:17 |
doude_ | In multizone mode, no host filter is applied. I set flag 'default_host_filter' to 'nova.scheduler.host_filter.HostFilterScheduler', but the filter doesn't return any host | 16:17 |
*** j05h has joined #openstack | 16:17 | |
*** Eyk is now known as Eyk^off | 16:18 | |
Razique | it's like my instance's libvirt.xml doesn't contain the link to that volume | 16:18 |
Razique | weird | 16:18 |
*** joearnold has joined #openstack | 16:18 | |
Razique | when I try to detach it: http://paste.openstack.org/show/1852/ | 16:19 |
Razique | can I "purge" the attachments ? | 16:19 |
*** obino has quit IRC | 16:20 | |
*** peads has joined #openstack | 16:20 | |
dabo | jaypipes: Just got back, and saw your question about AgentBuildCommands. Ask Johannes Erdfelt or comstud about that. | 16:22 |
*** fulc has quit IRC | 16:23 | |
*** cp16net_ has joined #openstack | 16:24 | |
*** huslage has joined #openstack | 16:24 | |
*** ziyadb has quit IRC | 16:25 | |
*** joearnold has quit IRC | 16:26 | |
*** joearnold has joined #openstack | 16:27 | |
*** HouseAway has quit IRC | 16:27 | |
*** cp16net_ has quit IRC | 16:27 | |
*** HouseAway has joined #openstack | 16:27 | |
kbringard | dabo: cool, thanks | 16:27 |
kbringard | (jay was asking for me :-)) | 16:27 |
*** huslage has quit IRC | 16:28 | |
Razique | could anyone help me ? :D kinda f****d up here :/ | 16:29 |
MarkusWPHRorg | Razique: does HP run a linux-base iscsi | 16:30 |
Razique | think so yes | 16:30 |
*** zenmatt has quit IRC | 16:31 | |
*** zenmatt has joined #openstack | 16:31 | |
Razique | when you attach a volume, the libvirt on the compute node adds line into the libvirt.xml right ? | 16:32 |
Razique | some <disk> entries I guess | 16:32 |
*** ziyadb has joined #openstack | 16:32 | |
*** Capashen has quit IRC | 16:33 | |
*** jtran has joined #openstack | 16:33 | |
*** huslage has joined #openstack | 16:37 | |
huslage | oh wow. internets | 16:38 |
huslage | magic | 16:38 |
Razique | actually, I figured that I just need to "reset" the attachment | 16:40 |
Razique | and everything should be OK now; I just created a volume and successfully detached and reattached it | 16:40 |
comstud | kbringard: nova maintains a list of the guest agent versions by OS/arch and so forth. Those AgentBuildCommands are there so you can update the table with what the current version of the agent should be... and the url and md5sum of the binary pkg. | 16:40 |
*** jdurgin has joined #openstack | 16:40 | |
comstud | kbringard: This is so when nova fires up a build, it can check the version of the guest agent and auto-update it to a newer version, if one exists in the table. | 16:40 |
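A hedged example of how that table might be maintained; the exact nova-manage agent subcommands and argument order are assumptions drawn from the AgentBuildCommands code of this era, so check `nova-manage agent` locally before relying on them:

```bash
# Hypothetical sketch: inspect the registered agent builds, then register a newer one
# (placeholders in angle brackets; argument names and order are not verified)
nova-manage agent list
nova-manage agent create <os> <architecture> <version> <url> <md5hash>
```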
*** KAM has left #openstack | 16:41 | |
huslage | comstud: agent? | 16:41 |
*** ziyadb_ has joined #openstack | 16:41 | |
*** ziyadb_ has joined #openstack | 16:41 | |
*** PiotrSikora has quit IRC | 16:41 | |
comstud | The guest agent is xenserver-only right now and can be found at lp:openstack-guest-agents | 16:41 |
kbringard | comstud: ah, nice. so it's useful in managing your entire cluster and making sure everything is in sync | 16:41 |
huslage | oh | 16:41 |
huslage | xen | 16:41 |
huslage | ugh | 16:41 |
comstud | huslage: auto-configures a VM | 16:41 |
huslage | like cloud-init? | 16:41 |
huslage | or more like puppet or chef? | 16:41 |
comstud | not familiar with cloud-init.. not really puppet or chef, either | 16:42 |
comstud | A guest agent is needed with xenserver so it can configure the IP address | 16:42 |
kbringard | cloud-init is the stuff that ubuntu wrote for querying the meta-data API for ssh keys and stuff | 16:42 |
comstud | address.. set the root password... | 16:42 |
comstud | and things like that. | 16:42 |
comstud | Ok, yeah, similar to that, then. | 16:42 |
huslage | yeah kbringard it's pretty spiffy | 16:43 |
kbringard | that sounds spiffy... cloud-init is nice but it's a beeyatch to get running on anything other than Ubuntu :-) | 16:43 |
comstud | except the agent accepts commands from nova.. so things are pushed instead of pulled, I guess you could say. | 16:43 |
huslage | why is it xen only? | 16:43 |
comstud | huslage: it was originally internal code at rackspace.. we run xen | 16:44 |
huslage | ah | 16:44 |
comstud | it communicates via xenstore | 16:44 |
huslage | k | 16:44 |
comstud | but it's pluggable now.. | 16:44 |
kbringard | and you guys are using Cent there, yea? | 16:44 |
comstud | you can swap that piece out for another communication layer. | 16:44 |
kbringard | or RH | 16:44 |
*** ziyadb has quit IRC | 16:44 | |
*** obino has joined #openstack | 16:44 | |
comstud | kbringard: not for cloud | 16:44 |
comstud | other areas of the biz use RH i _think_ | 16:44 |
kbringard | if I may ask, what OS are you running Xen on? | 16:45 |
huslage | comstud just works there | 16:45 |
huslage | hehe | 16:45 |
comstud | kbringard: I'm not sure what Ops is using. | 16:45 |
comstud | We're using debian in dev. | 16:45 |
comstud | (squeeze) | 16:45 |
*** PiotrSikora has joined #openstack | 16:45 | |
kbringard | ah, OK | 16:45 |
kbringard | makes sense | 16:45 |
comstud | some folks are using ubuntu | 16:45 |
kbringard | I'm sort of waiting for 11.10 before messing much with Xen | 16:47 |
kbringard | I've had limited success getting OpenStack to run on anything but Ubuntu, so I just don't want to mess with it, heh | 16:48 |
comstud | Oh | 16:48 |
comstud | sorry, I should clarify... | 16:48 |
*** jatsrt has quit IRC | 16:48 | |
comstud | I was thinking about our dev environments... | 16:48 |
comstud | We're running Citrix XenServer | 16:48 |
comstud | (which appears to be centos based) | 16:49 |
kbringard | ah, OK | 16:49 |
kbringard | that makes sense | 16:49 |
*** MarkAtwood has joined #openstack | 16:51 | |
*** zul has quit IRC | 16:52 | |
*** mihgen_ has joined #openstack | 16:53 | |
*** bcwaldon has joined #openstack | 16:55 | |
*** zul has joined #openstack | 16:55 | |
*** lorin1 has joined #openstack | 16:57 | |
*** bastichelaar has quit IRC | 16:57 | |
*** zenmatt has quit IRC | 16:58 | |
*** berto- has joined #openstack | 16:59 | |
kbringard | soren: you around? | 17:02 |
*** skraps has joined #openstack | 17:02 | |
*** mdomsch has quit IRC | 17:02 | |
*** berto- has quit IRC | 17:02 | |
*** stephan` has joined #openstack | 17:05 | |
*** pguth66 has joined #openstack | 17:06 | |
*** berto- has joined #openstack | 17:06 | |
huslage | in flatDHCP, can the nodes all talk to one another via the internal IP? | 17:07 |
kbringard | huslage: it depends | 17:07 |
huslage | i want to start an internal-only mysql server, for instance | 17:08 |
*** ohnoimdead has joined #openstack | 17:08 | |
kbringard | well... there is a flag --allow_project_net_traffic | 17:08 |
kbringard | I think it's true by default | 17:08 |
huslage | ok | 17:08 |
kbringard | oh wait | 17:08 |
kbringard | that's vlan mode | 17:08 |
kbringard | I think | 17:08 |
kbringard | sorry, ignore me | 17:08 |
huslage | i can't do vlan mode yet. waiting for the Real Hardware | 17:08 |
kbringard | <--- not enough coffee | 17:08 |
kbringard | I don't know much about flatDHCP, sorry | 17:09 |
*** deshantm_laptop has joined #openstack | 17:09 | |
huslage | ok…how do i return an address to the floating pool? | 17:10 |
huslage | it seems to never reuse | 17:10 |
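For reference, a sketch of the usual way to hand a floating address back through the EC2 API with euca2ools; 1.2.3.4 is a hypothetical address allocated to the project:

```bash
# Detach the floating IP from whatever instance currently holds it...
euca-disassociate-address 1.2.3.4
# ...then release it back into the floating pool so it can be reallocated
euca-release-address 1.2.3.4
```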
*** irahgel has quit IRC | 17:11 | |
*** mdomsch has joined #openstack | 17:12 | |
*** erik-s has quit IRC | 17:13 | |
*** erik-s has joined #openstack | 17:14 | |
*** kd926 has joined #openstack | 17:15 | |
kd926 | I'm trying to start nova on gentoo and I get a trace when doing nova-manage db sync: | 17:15 |
kd926 | TypeError: immutabledict object is immutable | 17:15 |
*** CatKiller has quit IRC | 17:17 | |
MarkusWPHRorg | Does anyone here know anything about running OpenStack with rPath? | 17:19 |
MarkusWPHRorg | If so, please PM me.... | 17:19 |
*** AhmedSoliman has joined #openstack | 17:20 | |
*** joearnold has quit IRC | 17:20 | |
*** joearnold has joined #openstack | 17:20 | |
*** dendro-afk is now known as dendrobates | 17:20 | |
*** ziyadb_ has quit IRC | 17:21 | |
*** po has quit IRC | 17:21 | |
*** AhmedSoliman has quit IRC | 17:22 | |
*** AhmedSoliman has joined #openstack | 17:23 | |
*** CatKiller has joined #openstack | 17:23 | |
*** Ryan_Lane has joined #openstack | 17:24 | |
kd926 | MarkusWPHRorg: can rPath export qemu compatible images? | 17:24 |
*** jheiss has joined #openstack | 17:25 | |
*** berto- has quit IRC | 17:26 | |
*** llang629 has joined #openstack | 17:26 | |
*** llang629 has left #openstack | 17:26 | |
MarkusWPHRorg | kd926: Sorry I'm very new to rPath | 17:27 |
MarkusWPHRorg | VERY new. But the outfit I work for is a customer of theirs | 17:27 |
MarkusWPHRorg | so I was hoping some folks from there might be hanging out here so we could chat | 17:28 |
kd926 | the outfit wouldn't happen to have a green logo would it | 17:28 |
MarkusWPHRorg | Don't think so | 17:29 |
kd926 | ah alright, just curious | 17:29 |
MarkusWPHRorg | Who has a green logo? | 17:29 |
kd926 | so I am vaguely familiar with rPath | 17:29 |
kd926 | AMD | 17:29 |
*** jaypipes has quit IRC | 17:30 | |
kd926 | do you have the infrastructure running somewhere at your outfit? | 17:30 |
*** MarcMorata has joined #openstack | 17:30 | |
MarkusWPHRorg | not sure | 17:30 |
kd926 | you said they are a customer of rPath's product though? | 17:30 |
MarkusWPHRorg | Yeah, but I'm not sure if we have it functional yet | 17:31 |
kd926 | hmm | 17:31 |
MarkusWPHRorg | and as far as Openstack is concerned, that's something I'm looking into | 17:31 |
*** jaypipes has joined #openstack | 17:31 | |
*** berto- has joined #openstack | 17:31 | |
*** CatKiller has quit IRC | 17:37 | |
*** GeoDud has joined #openstack | 17:40 | |
*** huslage has quit IRC | 17:43 | |
*** katkee has quit IRC | 17:44 | |
*** CatKiller has joined #openstack | 17:49 | |
*** CatKiller has quit IRC | 17:50 | |
*** CatKiller has joined #openstack | 17:51 | |
*** circloud has quit IRC | 17:53 | |
*** nacx has quit IRC | 17:53 | |
*** negronjl_ has quit IRC | 17:54 | |
*** ryker has quit IRC | 17:56 | |
*** ryker has joined #openstack | 17:56 | |
*** zul has quit IRC | 17:59 | |
*** evil_e_ is now known as evil_e | 18:00 | |
*** stephan` has left #openstack | 18:02 | |
*** mgius has joined #openstack | 18:03 | |
*** mdomsch has quit IRC | 18:07 | |
*** jtran has quit IRC | 18:10 | |
*** obino1 has joined #openstack | 18:14 | |
*** AhmedSoliman has quit IRC | 18:16 | |
*** obino has quit IRC | 18:17 | |
*** j05h has quit IRC | 18:18 | |
*** j05h has joined #openstack | 18:18 | |
*** zul has joined #openstack | 18:21 | |
*** RickB17 has quit IRC | 18:27 | |
*** [1]RickB17 is now known as RickB17 | 18:27 | |
*** joearnold has quit IRC | 18:30 | |
*** mihgen_ has quit IRC | 18:32 | |
*** huslage has joined #openstack | 18:32 | |
*** Glace has joined #openstack | 18:35 | |
*** Glace is now known as Guest28522 | 18:35 | |
*** Guest28522 is now known as Glacee | 18:35 | |
*** bastichelaar has joined #openstack | 18:36 | |
*** kashyap has quit IRC | 18:37 | |
*** med_out is now known as medberry | 18:41 | |
*** skraps has quit IRC | 18:44 | |
*** ctennis has quit IRC | 18:46 | |
Razique | Hi, I need to run a fsck on a lvm volume but nova uses it | 18:47 |
Razique | how can I disable it ? lvchange -an doesn't work | 18:48 |
*** MarcMorata has quit IRC | 18:54 | |
*** zul has quit IRC | 18:55 | |
*** NeCRoManTe1 has joined #openstack | 19:02 | |
*** NeCRoManTe has quit IRC | 19:02 | |
*** MarkusWPHRorg has quit IRC | 19:07 | |
*** royh has joined #openstack | 19:09 | |
*** RoAkSoAx has quit IRC | 19:09 | |
*** RoAkSoAx has joined #openstack | 19:09 | |
*** thinkscientist has joined #openstack | 19:11 | |
*** katkee has joined #openstack | 19:12 | |
*** zul has joined #openstack | 19:12 | |
vishy | Razique: is it attached to an instance? | 19:14 |
Razique | nope | 19:14 |
Razique | I've stopped the whole nova cloud | 19:14 |
Razique | stopeed iscsi | 19:14 |
Razique | but still unable to run a lvchange -an | 19:14 |
vishy | hmm | 19:15 |
vishy | something must still have it open | 19:15 |
vishy | try an lsof on it | 19:15 |
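A sketch of that check; the volume group and volume names below are placeholders for whatever nova-volume created on this host:

```bash
# Which processes still hold the logical volume open?
sudo lsof /dev/nova-volumes/volume-00000001

# lvdisplay's "# open" count tells you whether something (an iSCSI target, a kpartx
# mapping, a leftover qemu process) still references the LV; while it is non-zero,
# lvchange -an will keep refusing to deactivate it
sudo lvdisplay /dev/nova-volumes/volume-00000001
```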
*** mrmartin has joined #openstack | 19:16 | |
*** obino has joined #openstack | 19:20 | |
*** thinkscientist has quit IRC | 19:21 | |
*** obino1 has quit IRC | 19:24 | |
*** 92AADFGZE has joined #openstack | 19:27 | |
*** katkee has quit IRC | 19:27 | |
*** morfeas has joined #openstack | 19:28 | |
*** joearnold has joined #openstack | 19:29 | |
*** katkee has joined #openstack | 19:29 | |
*** ryker has quit IRC | 19:32 | |
bastichelaar | anyone here using LXC? | 19:36 |
*** ryker has joined #openstack | 19:38 | |
uvirtbot | New bug: #806647 in nova "NBD device doesn't get disconnected after terminating LXC instance" [Undecided,New] https://launchpad.net/bugs/806647 | 19:42 |
*** obino has quit IRC | 19:42 | |
*** obino has joined #openstack | 19:43 | |
*** syah has joined #openstack | 19:48 | |
*** ctennis has joined #openstack | 19:53 | |
*** mies has quit IRC | 19:55 | |
uvirtbot | New bug: #806653 in nova "expose utils.usage_from_instance in nova api" [Undecided,New] https://launchpad.net/bugs/806653 | 19:56 |
*** zul has quit IRC | 19:59 | |
vishy | bastichelaar: is that your bug/patch? | 20:01 |
devcamcar | vishy: reading your ha-network bits | 20:03 |
devcamcar | vishy: sweet stuff | 20:03 |
bastichelaar | yes | 20:03 |
devcamcar | vishy: is vlan support in scope? | 20:03 |
vishy | devcamcar: vlan mode should work but it is costly | 20:03 |
devcamcar | vishy: how so? | 20:04 |
vishy | bastichelaar: thanks for finding that issue, i think it needs to be reworked a little bit but that is close to the right fix | 20:04 |
devcamcar | vishy: does br100 go away? | 20:04 |
vishy | in vlan mode, it will create all vlans on all hosts | 20:04 |
vishy | and every host needs an ip in every project network, so it burns a lot of ips | 20:05 |
vishy | not that big of a deal for a 12 node cluster | 20:05 |
bastichelaar | vishy: thanks. I'm not a good programmer, but it seems that the LXC stuff is really basic and not used a lot, right? | 20:06 |
vishy | but for a 255+ node cluster that is killing a /24 for every project | 20:06 |
WormMan | I had no problem not giving my compute nodes IPs in the added network | 20:06 |
WormMan | but I didn't try VLAN | 20:06 |
*** jtran has joined #openstack | 20:06 | |
WormMan | er in the project network | 20:06 |
vishy | bastichelaar: most people using LXC are not using cow images i think | 20:06 |
vishy | WormMan: true in the current code, we're talking about the ha-net branch | 20:06 |
vishy | using multi_host networks | 20:07 |
bastichelaar | vishy: what are they using then? I'm quite new in this area... | 20:08 |
*** tonycampbell has joined #openstack | 20:08 | |
heckj | vishy: where do the floating IPs get allocated in the ha-net branch? Spread around the various node, or collocated with the VMs? | 20:09 |
vishy | collocated | 20:09 |
vishy | floating ips are allocated to the host that the vm is on | 20:10 |
vishy | and natted to the fixed_ip | 20:10 |
vishy | bastichelaar: --nouse_cow_images | 20:10 |
*** tonycampbell is now known as tcampbell | 20:10 | |
vishy | will use losetup mounts instead of nbd | 20:10 |
vishy | should be a bit faster | 20:10 |
bastichelaar | vishy: thanks, I just discovered that option :) | 20:10 |
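Conceptually, the floating-to-fixed mapping vishy describes is plain NAT on the host running the instance; a simplified sketch of the kind of rules nova-network maintains (the addresses are hypothetical, and the real rules live in nova-managed chains rather than being added by hand):

```bash
# Inbound: traffic to the floating IP is DNATted to the instance's fixed IP
sudo iptables -t nat -A PREROUTING -d 192.0.2.10 -j DNAT --to-destination 10.0.0.5

# Outbound: traffic from the instance is SNATted so it appears to come from the floating IP
sudo iptables -t nat -A POSTROUTING -s 10.0.0.5 -j SNAT --to-source 192.0.2.10
```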
vishy | in any case your bug should be fixed so thanks for reporting it | 20:11 |
bastichelaar | no problem, hope to find more bugs ;) | 20:11 |
*** msivanes has quit IRC | 20:12 | |
bastichelaar | but it seems that the lxc implementation in libvirt is quite poor | 20:12 |
*** msivanes has joined #openstack | 20:13 | |
*** willaerk has joined #openstack | 20:13 | |
*** dprince has quit IRC | 20:26 | |
*** jtran has quit IRC | 20:28 | |
*** winston-d has quit IRC | 20:37 | |
*** bpaluch has joined #openstack | 20:41 | |
*** HouseAway is now known as AimanA | 20:43 | |
*** dendrobates is now known as dendro-afk | 20:51 | |
*** mies has joined #openstack | 20:54 | |
*** msivanes has quit IRC | 20:59 | |
*** lorin1 has quit IRC | 21:04 | |
vishy | bastichelaar: is it missing some things that you need? | 21:04 |
*** TREllis has joined #openstack | 21:09 | |
*** aliguori has quit IRC | 21:12 | |
*** aliguori has joined #openstack | 21:12 | |
TREllis | does nova bind a metadata service IP to an interface? | 21:13 |
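For background, nova of this era generally does not bind the metadata address to an interface; nova-network redirects 169.254.169.254 to the API node's EC2/metadata port with an iptables rule. A sketch of what that rule looks like (the API address here is hypothetical, and the real rule lives in a nova-managed chain):

```bash
# Redirect the magic metadata address to the nova-api metadata listener (port 8773)
sudo iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.1.1:8773
```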
*** mrmartin has quit IRC | 21:18 | |
*** huslage has quit IRC | 21:18 | |
bastichelaar | vishy: not really, but I'm struggling with a bug that causes libvirt_lxc to generate 100% cpu load when I start an instance | 21:20 |
vishy | hmm that is nasty | 21:20 |
vishy | do you have the newest libvirt? | 21:20 |
bastichelaar | this is the output of the log file | 21:20 |
bastichelaar | 23:02:11.837: 3722: error : lxcFdForward:287 : read of fd 7 failed: Input/output error | 21:20 |
bastichelaar | 23:02:11.837: 3722: error : lxcFdForward:287 : read of fd 7 failed: Input/output error | 21:20 |
bastichelaar | 23:02:11.837: 3722: error : lxcFdForward:287 : read of fd 7 failed: Input/output error | 21:20 |
bastichelaar | 23:02:11.837: 3722: error : lxcFdForward:287 : read of fd 7 failed: Input/output error | 21:20 |
bastichelaar | 23:02:11.837: 3722: error : lxcFdForward:287 : read of fd 7 failed: Input/output error | 21:20 |
bastichelaar | 23:02:11.837: 3722: error : lxcFdForward:287 : read of fd 7 failed: Input/output error | 21:20 |
bastichelaar | 23:02:11.837: 3722: error : lxcFdForward:287 : read of fd 7 failed: Input/output error | 21:20 |
bastichelaar | yes | 21:20 |
bastichelaar | sorry for spamming, should have used pastebin :) | 21:20 |
WormMan | ahh, the classic read from a closed pipe bug that every software product has had at least once :) | 21:20 |
bastichelaar | and it generates this message every millisecond, so the disk is filling up | 21:21 |
bastichelaar | but I can't get my head around this bug | 21:21 |
bastichelaar | or configuration error | 21:21 |
bastichelaar | dont know yet | 21:21 |
WormMan | run it under strace and see what's going on? (if you're trying to troubleshoot it) may be a bit too low level though | 21:22 |
*** NeCRoManTe1 has quit IRC | 21:22 | |
bastichelaar | ok, I will try to create a wrapper around libvirt_lxc and output the strace in a file | 21:22 |
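A minimal sketch of such a wrapper, assuming libvirt_lxc lives at /usr/lib/libvirt/libvirt_lxc (the path varies by distro and packaging):

```bash
# Move the real helper aside and drop a tracing wrapper in its place
sudo mv /usr/lib/libvirt/libvirt_lxc /usr/lib/libvirt/libvirt_lxc.real
sudo tee /usr/lib/libvirt/libvirt_lxc > /dev/null <<'EOF'
#!/bin/sh
# Trace the helper and its children, one log per invocation, then exec the real binary
exec strace -f -tt -o /tmp/libvirt_lxc.$$.strace /usr/lib/libvirt/libvirt_lxc.real "$@"
EOF
sudo chmod +x /usr/lib/libvirt/libvirt_lxc
```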
*** NeCRoManTe has joined #openstack | 21:23 | |
*** _vinay has joined #openstack | 21:26 | |
_vinay | Hi | 21:27 |
TREllis | hmmm can't ssh to instance and notice it has issue contacting metadata service, virsh console shows "error: internal error character device (null) is not using a PTY" on the instance, any ideas? | 21:35 |
*** kbringard has quit IRC | 21:35 | |
*** patcoll has quit IRC | 21:37 | |
*** titaniumrain has joined #openstack | 21:43 | |
titaniumrain | . | 21:44 |
titaniumrain | anyone there? | 21:44 |
*** bcwaldon has quit IRC | 21:45 | |
*** dendro-afk is now known as dendrobates | 21:46 | |
*** bcwaldon has joined #openstack | 21:47 | |
_vinay | Hi, I am having trouble injecting data into an instance | 21:48 |
_vinay | libvir: QEMU error : Domain not found: no domain with matching name 'instance-00000001' | 21:49 |
_vinay | libvir: Network Filter error : Network filter not found: no nwfilter with matching name 'nova-instance-instance-00000001-secgroup | 21:49 |
_vinay | any idea how to go about debugging this | 21:49 |
_vinay | ? | 21:49 |
_vinay | from compute logs I see following error also | 21:51 |
_vinay | Command: sudo qemu-nbd -c /dev/nbd15 /nova/..//instances/instance-00000002/disk | 21:51 |
_vinay | Stderr: "qemu-nbd: Could not access '/dev/nbd15': No such file or directory\n") | 21:51 |
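That particular error usually just means the nbd kernel module isn't loaded, so the /dev/nbd* devices don't exist; a quick check (the max_part value is an arbitrary example):

```bash
# Is the nbd module loaded at all?
lsmod | grep nbd

# If not, load it so nova's file-injection code can attach images via qemu-nbd
sudo modprobe nbd max_part=16
ls /dev/nbd*
```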
*** bcwaldon has quit IRC | 21:52 | |
*** matiu has joined #openstack | 21:53 | |
*** Deirz- has quit IRC | 22:00 | |
bastichelaar | I'm not the expert here, but it looks like libvirt didn't start the instance | 22:00 |
bastichelaar | look in the logs before the entry you posted | 22:00 |
*** londo_ has joined #openstack | 22:01 | |
*** ldlework has quit IRC | 22:09 | |
ohnoimdead | ls | 22:22 |
ohnoimdead | i'm awesome | 22:22 |
*** berto- has quit IRC | 22:22 | |
devcamcar | my hero | 22:23 |
* ohnoimdead is a pro | 22:23 | |
*** berto- has joined #openstack | 22:24 | |
*** dendrobates is now known as dendro-afk | 22:26 | |
*** bastichelaar has quit IRC | 22:26 | |
*** nati has joined #openstack | 22:28 | |
*** amccabe has quit IRC | 22:30 | |
*** joearnold has quit IRC | 22:32 | |
*** 92AADFGZE has quit IRC | 22:32 | |
vishy | virsh console doesn't work normally on instances | 22:32 |
*** markvoelker has quit IRC | 22:32 | |
*** lool has quit IRC | 22:32 | |
*** jonkelly has quit IRC | 22:33 | |
*** jaypipes has quit IRC | 22:33 | |
*** jkoelker has quit IRC | 22:34 | |
*** Shentonfreude has quit IRC | 22:35 | |
*** dsockwell has quit IRC | 22:36 | |
*** lool has joined #openstack | 22:36 | |
*** dsockwell has joined #openstack | 22:37 | |
*** joearnold has joined #openstack | 22:38 | |
*** katkee has quit IRC | 22:44 | |
*** matiu has quit IRC | 22:45 | |
*** jaypipes has joined #openstack | 22:45 | |
*** matiu has joined #openstack | 22:45 | |
*** miclorb has joined #openstack | 22:46 | |
*** AhmedSoliman has joined #openstack | 22:47 | |
*** lvaughn has quit IRC | 22:51 | |
*** jaypipes has quit IRC | 22:51 | |
*** lorin1 has joined #openstack | 22:52 | |
*** tcampbell has quit IRC | 22:54 | |
*** NeCRoManTe has quit IRC | 22:55 | |
*** technicool has joined #openstack | 22:57 | |
*** nati has quit IRC | 22:58 | |
*** nati has joined #openstack | 22:58 | |
*** joearnold has quit IRC | 22:58 | |
*** joearnold has joined #openstack | 22:58 | |
*** jaypipes has joined #openstack | 23:01 | |
*** matiu__ has joined #openstack | 23:03 | |
*** matiu__ has quit IRC | 23:03 | |
*** dgags has quit IRC | 23:03 | |
*** matiu has quit IRC | 23:05 | |
*** willaerk has quit IRC | 23:08 | |
*** hggdh has quit IRC | 23:11 | |
*** ewindisch has joined #openstack | 23:15 | |
*** hggdh has joined #openstack | 23:16 | |
*** Vasichkin has quit IRC | 23:19 | |
*** zedas has joined #openstack | 23:20 | |
*** AhmedSoliman has quit IRC | 23:21 | |
*** nati has quit IRC | 23:24 | |
*** mihgen has joined #openstack | 23:25 | |
*** medberry is now known as med_out | 23:26 | |
*** deshantm_laptop has quit IRC | 23:26 | |
zedas | hey, what part of the code figures out it needs to load the config file: /etc/swift/account-server/4.conf | 23:29 |
*** huslage has joined #openstack | 23:31 | |
*** Ephur has quit IRC | 23:33 | |
*** mihgen has quit IRC | 23:34 | |
notmyname | zedas: swift-init calls swift.common.manager. manager.py:363 finds the conf files (I'm pretty sure this is where it gets found and called) | 23:36 |
*** joearnold has quit IRC | 23:36 | |
*** joearnold has joined #openstack | 23:36 | |
*** joearnold has quit IRC | 23:40 | |
notmyname | zedas: so `swift-init account start` ends up finding and starting all (4) account servers in the all-in-one | 23:40 |
notmyname | (called from Server.launch() in swift.common.manager) | 23:41 |
zedas | notmyname: thanks, and then that's doing a search of the tree and looking for a specific number of config files that comes from.....? | 23:45 |
notmyname | zedas: unless the number is given it finds (and therefore launches) an instance for every conf file. so 4 conf files == 4 servers launched. the number can get passed in by swift-init (-c or --config-num) | 23:49 |
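To make that concrete, a short sketch against the standard Swift all-in-one layout (the four numbered conf files are the usual SAIO setup, used here as an example):

```bash
# One conf file per account-server instance in the all-in-one
ls /etc/swift/account-server/        # 1.conf  2.conf  3.conf  4.conf

# Start one server per conf file found (four here)...
swift-init account start

# ...or only the instance backed by 4.conf, by passing the config number
swift-init account start -c 4
```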
*** adjohn has joined #openstack | 23:50 | |
*** tcampbell has joined #openstack | 23:53 | |
zedas | notmyname: ok great, thanks for the guidance | 23:57 |
notmyname | np | 23:57 |