Tuesday, 2012-08-07

*** matiu has joined #openstack00:00
*** nmistry has quit IRC00:06
*** halfss has joined #openstack00:07
*** robix has quit IRC00:07
*** nightcrawler786 has quit IRC00:08
*** sdake_ has quit IRC00:11
*** alop has quit IRC00:12
*** maoy has joined #openstack00:13
*** mkouhei has joined #openstack00:13
uvirtbotNew bug: #1033713 in nova "Traceback when detaching volumes when using cinder." [Undecided,New] https://launchpad.net/bugs/103371300:16
*** ewindisch has quit IRC00:16
*** kindaopsdevy has quit IRC00:18
*** kindaopsdevy has joined #openstack00:18
*** sacharya has quit IRC00:19
*** Turicas has quit IRC00:24
*** sdake has joined #openstack00:29
*** halfss has quit IRC00:30
*** desai has joined #openstack00:30
*** llang629 has quit IRC00:30
*** mislam has quit IRC00:30
*** mwichmann has quit IRC00:31
*** lipinski has joined #openstack00:32
lipinskiHaving a problem with openstack, specifically nova-network not starting dnsmasq.  Anyone here that can help?00:32
*** imsplitbit has joined #openstack00:33
*** Gordonz has joined #openstack00:33
*** RamJett has quit IRC00:35
*** johnpur has left #openstack00:35
*** lborda has joined #openstack00:36
*** miclorb_ has joined #openstack00:39
*** k0rupted has quit IRC00:39
*** warik has left #openstack00:40
*** miclorb has quit IRC00:43
*** miclorb_ has quit IRC00:44
*** gyee has quit IRC00:45
*** ejat has quit IRC00:46
*** issackelly has quit IRC00:48
*** matwood has quit IRC00:49
*** MarkAtwood has quit IRC00:53
*** datajerk has joined #openstack00:55
*** imsplitbit has quit IRC00:55
*** samkottler has quit IRC00:56
*** comptona has quit IRC00:56
uvirtbotNew bug: #1033722 in horizon "UX Improvements: Add a status indicator for Image Creation calls " [Undecided,New] https://launchpad.net/bugs/103372200:56
*** sdake has quit IRC00:57
*** jakkudanieru has quit IRC00:57
*** jakkudanieru has joined #openstack00:57
*** KavanS has quit IRC00:58
*** sdake has joined #openstack01:00
*** kindaopsdevy has quit IRC01:00
*** miclorb has joined #openstack01:00
lipinskiAnyone that can help with dnsmasq problem?01:01
*** aspiers has quit IRC01:01
DiopterWhat seems to be the problem?01:01
lipinskinova-network is no longer starting dnsmasq.01:02
lipinskiIt was for a while, and I was having a problem where it wouldn't respond to DHCP requests.  I restarted nova-network and now it no longer starts dnsmasq.01:02
lipinskiI tried changing all kinds of things in nova.conf and dnsmasq.conf - to no avail.01:02
lipinskiI don't see anything in any /var/log/nova logs01:02
*** Gordonz has quit IRC01:03
DiopterHrm.01:04
*** matiu has quit IRC01:04
DiopterSure dnsmasq isn't running currently? Possibly the default service...01:04
Diopterpgrep -fl dnsmasq01:05
*** jdurgin has quit IRC01:05
lipinskinope - no dnsmasq processes running01:06
lipinskiI also thought nova-network would set up the bridge too, but it's not doing that either.01:07
lipinskiIt's like nova-network is not doing everything it is supposed to.01:07
DiopterAnd nova-network.log isn't useful you said01:08
*** markmcclain has joined #openstack01:09
lipinskinot that I can tell.  I'm looking through all the debug messages as verbose=true in nova.conf.  I don't see any indications of any problems.01:09
lipinskialmost looks like nova-network scans the config file, then does some iptables stuff.  that's about it.01:11
*** aspiers has joined #openstack01:13
*** cloudvirt has joined #openstack01:15
*** marrusl has quit IRC01:16
*** oubiwann1 has quit IRC01:16
*** datajerk has quit IRC01:17
*** erkules|away has joined #openstack01:17
*** erkules has quit IRC01:19
*** zhuadl has quit IRC01:25
*** msinhore has joined #openstack01:29
*** samkottler has joined #openstack01:32
*** msinhore has quit IRC01:36
*** fikus-kukis^TP has quit IRC01:37
*** bencherian has quit IRC01:38
*** msinhore has joined #openstack01:38
*** adjohn has quit IRC01:39
*** hunglin has joined #openstack01:39
*** whenry has quit IRC01:41
*** issackelly has joined #openstack01:47
*** cloudvirt has quit IRC01:48
*** qazwsx has quit IRC01:51
*** cloudvirt has joined #openstack01:52
*** johnpostlethwait has joined #openstack01:53
*** jtran has joined #openstack01:55
*** mrjazzcat has quit IRC01:57
*** maurosr has joined #openstack01:57
*** tualatrix has joined #openstack01:58
*** matwood has joined #openstack01:58
*** jplewi has quit IRC01:58
*** MarkAtwood has joined #openstack01:59
*** livemoo has joined #openstack01:59
*** cloudvirt has quit IRC01:59
*** livemoo has quit IRC02:00
*** maurosr has quit IRC02:01
*** livemoon has joined #openstack02:02
*** nmistry has joined #openstack02:02
*** comptona has joined #openstack02:02
*** maurosr has joined #openstack02:03
*** MarkAtwood has quit IRC02:04
*** MarkAtwood has joined #openstack02:05
*** kpavel has quit IRC02:06
*** johnpostlethwait has quit IRC02:06
*** maurosr has quit IRC02:07
*** tualatrix_ has joined #openstack02:08
*** tualatrix has quit IRC02:08
*** maurosr has joined #openstack02:10
*** latnem has joined #openstack02:12
*** QRPIKE has joined #openstack02:13
*** qazwsx has joined #openstack02:13
*** whenry has joined #openstack02:13
*** maurosr has quit IRC02:15
*** miclorb has quit IRC02:18
*** sdake has quit IRC02:18
*** jakkudan_ has joined #openstack02:20
*** bilal has quit IRC02:21
*** msinhore has quit IRC02:23
*** bilal has joined #openstack02:24
*** zhuadl has joined #openstack02:24
*** Ryan_Lane has quit IRC02:24
*** jakkudanieru has quit IRC02:24
*** maurosr has joined #openstack02:27
*** jackh has joined #openstack02:28
latnemhi guys, has anyone had problems with rabbitmq and nova client? I seem to endlessly be getting 401's when I try to use nova client to connect to the rackspace cloud and return a list of images. I can run the same piece of code in the python shell without any problems at all.02:30
*** aspiers has quit IRC02:31
*** maurosr has quit IRC02:32
*** ayoung has quit IRC02:33
*** colinmcnamara has joined #openstack02:33
*** mjfork has quit IRC02:34
*** zehicle has joined #openstack02:34
*** rkukura has quit IRC02:35
*** rkukura has joined #openstack02:36
*** kindaopsdevy has joined #openstack02:36
*** whenry has quit IRC02:37
*** tserong_ is now known as tserong02:38
*** issackelly has quit IRC02:38
*** issackelly has joined #openstack02:38
*** jakkudan_ has quit IRC02:41
*** jakkudanieru has joined #openstack02:41
*** s0mik has joined #openstack02:41
*** jakkudanieru has quit IRC02:42
*** sunxin has joined #openstack02:43
*** aspiers has joined #openstack02:43
*** tongli has joined #openstack02:44
*** sdake has joined #openstack02:45
*** tgall_foo has joined #openstack02:46
*** tgall_foo has quit IRC02:46
*** tgall_foo has joined #openstack02:46
*** dolphm has joined #openstack02:47
*** msinhore has joined #openstack02:47
*** bharata has joined #openstack02:51
*** pixelbeat has quit IRC02:51
*** sunxin_ has joined #openstack02:51
*** roaet has quit IRC02:52
*** sunxin has quit IRC02:53
*** dolphm has quit IRC02:53
*** edude03 has joined #openstack02:53
*** nati_ueno has quit IRC02:54
*** wall has quit IRC02:55
*** bencherian has joined #openstack02:57
*** sunxin__ has joined #openstack02:58
*** sunxin_ has quit IRC03:00
*** tongli has quit IRC03:03
*** kindaopsdevy has quit IRC03:03
*** msinhore has quit IRC03:06
*** colinmcnamara has quit IRC03:07
*** sunxin__ has quit IRC03:07
*** clopez has quit IRC03:08
*** ewindisch has joined #openstack03:08
cooljlatnem: 401 means it's not authenticating properly...can you pastebin your code (sanitized of course)?03:08
latnemit's cool…I just solved it a few minutes ago. The problem was that my shell was picking up the environment vars and my queue didn't have the environment vars loaded.03:09
cooljright on03:10
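A sketch of what latnem describes: the interactive shell had auth variables exported that the queue worker's environment did not. The variable names below are the usual novaclient ones and are assumptions for any particular provider:

    export OS_AUTH_URL=https://identity.example.com/v2.0/   # placeholder endpoint
    export OS_USERNAME=myuser
    export OS_PASSWORD=secret
    export OS_TENANT_NAME=myproject
    nova image-list   # works here; a worker spawned without these variables gets 401s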
*** colinmcnamara has joined #openstack03:10
*** lloydde has joined #openstack03:10
*** msinhore has joined #openstack03:11
*** supriya has joined #openstack03:11
*** ryanpetrello has quit IRC03:12
*** datajerk has joined #openstack03:12
*** lloydde has quit IRC03:13
*** tuf8 has quit IRC03:13
*** ryanpetrello has joined #openstack03:13
*** msinhore has quit IRC03:16
*** localhost has joined #openstack03:16
*** dolphm has joined #openstack03:18
*** jakkudanieru has joined #openstack03:20
*** jakkudanieru has quit IRC03:21
*** retr0h has quit IRC03:22
*** jakkudanieru has joined #openstack03:24
*** s0mik has quit IRC03:25
*** jakkudan_ has joined #openstack03:25
*** Blackavar has quit IRC03:25
*** s0mik has joined #openstack03:27
*** s0mik has quit IRC03:28
*** jakkudanieru has quit IRC03:28
*** dolphm has quit IRC03:34
*** Sweetshark has joined #openstack03:39
*** ewindisch has quit IRC03:41
*** edygarcia has joined #openstack03:42
*** Sweetsha1k has quit IRC03:42
*** arrsim` has quit IRC03:44
*** arrsim` has joined #openstack03:44
*** adjohn has joined #openstack03:45
*** miclorb has joined #openstack03:47
*** edygarcia has quit IRC03:48
*** pergaminho has joined #openstack03:49
uvirtbotNew bug: #1033757 in tempest "Tests in test_server_basic_ops.py hang while waiting for keyring password" [Critical,New] https://launchpad.net/bugs/103375703:51
*** hunglin has left #openstack03:52
*** rnorwood has joined #openstack03:52
*** koolhead17 has joined #openstack03:55
*** adjohn has quit IRC03:58
*** aspiers has quit IRC04:01
*** sacharya has joined #openstack04:01
*** colinmcnamara has quit IRC04:05
*** zhuadl has quit IRC04:06
*** miclorb_ has joined #openstack04:10
*** chrisgerling has joined #openstack04:10
*** miclorb has quit IRC04:10
*** dualism has joined #openstack04:12
*** secbitchris has quit IRC04:13
*** aspiers has joined #openstack04:14
*** infernix has quit IRC04:14
*** jakkudan_ has quit IRC04:14
*** jakkudanieru has joined #openstack04:15
*** chrisgerling has quit IRC04:15
*** ejat has joined #openstack04:16
*** miclorb_ has quit IRC04:16
*** dolphm has joined #openstack04:17
*** tualatrix_ has quit IRC04:17
*** tualatrix has joined #openstack04:18
*** llang629 has joined #openstack04:18
*** dirakx has joined #openstack04:19
*** vogxn has joined #openstack04:19
*** miclorb has joined #openstack04:20
*** primeministerp has quit IRC04:20
*** primemin1sterp has joined #openstack04:20
*** ryanpetrello has quit IRC04:22
*** dirakx has quit IRC04:23
*** primemin1sterp has quit IRC04:25
*** primeministerp has joined #openstack04:25
*** koolhead17 has quit IRC04:26
*** nRy has joined #openstack04:29
*** roaet has joined #openstack04:33
*** markmcclain has quit IRC04:39
*** livemoon has quit IRC04:39
*** colinmcnamara has joined #openstack04:44
*** roaet has quit IRC04:47
*** colinmcnamara has quit IRC04:47
*** deepakcs has joined #openstack04:48
*** supriya has quit IRC04:48
*** zhuadl has joined #openstack04:48
*** colinmcnamara has joined #openstack04:48
*** roge has quit IRC04:49
*** jtran has quit IRC04:50
*** Ryan_Lane has joined #openstack04:51
*** samkottler has quit IRC04:53
*** k4n0 has joined #openstack04:55
*** QRPIKE has quit IRC04:56
*** lborda has quit IRC04:58
*** QRPIKE has joined #openstack04:58
*** Blackavar has joined #openstack05:01
*** rocambole has joined #openstack05:02
*** dolphm has quit IRC05:03
*** nRy has quit IRC05:03
*** tgall_foo has quit IRC05:08
*** maoy has quit IRC05:09
*** steveb_ has quit IRC05:13
*** mrunge has joined #openstack05:15
*** jog0 has quit IRC05:15
*** vivek has quit IRC05:16
*** jtran has joined #openstack05:17
*** _et has quit IRC05:21
*** shaon has joined #openstack05:22
*** garyk has joined #openstack05:23
*** ejat has quit IRC05:24
*** rnorwood has quit IRC05:25
*** Turicas has joined #openstack05:29
*** retr0h has joined #openstack05:30
*** aspiers has quit IRC05:31
*** shaon has quit IRC05:32
*** epim has quit IRC05:32
*** nRy has joined #openstack05:33
jasonozhmm05:33
*** sacharya has quit IRC05:36
*** arBmind has joined #openstack05:42
*** zigo has joined #openstack05:42
*** k4n0 has quit IRC05:43
*** aspiers has joined #openstack05:43
*** tualatrix has joined #openstack05:43
*** miclorb has quit IRC05:48
prometheanfirehrmmm05:48
*** miclorb_ has joined #openstack05:48
*** Glace_ has joined #openstack05:49
*** osier has joined #openstack05:49
*** Glace has quit IRC05:50
*** ondergetekende has joined #openstack05:50
*** zodiak has quit IRC05:50
*** shang has quit IRC05:53
*** jakkudanieru has quit IRC05:53
*** jakkudanieru has joined #openstack05:53
*** shang has joined #openstack05:54
*** miclorb_ has quit IRC05:58
*** miclorb has joined #openstack05:58
*** hattwick has quit IRC06:00
*** shaon has joined #openstack06:01
*** ejat has joined #openstack06:01
*** nRy has quit IRC06:06
*** Turicas has quit IRC06:06
*** nRy has joined #openstack06:07
*** melmoth has joined #openstack06:09
*** latnem has quit IRC06:09
*** jakkudanieru has quit IRC06:09
*** jakkudanieru has joined #openstack06:10
*** jtran has quit IRC06:18
*** prometheanfire has quit IRC06:20
*** pvankouteren has joined #openstack06:22
*** miclorb has quit IRC06:22
*** adjohn has joined #openstack06:22
*** miclorb has joined #openstack06:24
*** zodiak has joined #openstack06:25
*** never2far has joined #openstack06:25
*** PiotrSikora has quit IRC06:26
*** PiotrSikora has joined #openstack06:27
*** colinmcnamara has quit IRC06:32
*** littleidea has quit IRC06:35
*** aspiers has quit IRC06:36
*** aspiers has joined #openstack06:39
kviiriHello Stackers06:39
*** guigui3 has joined #openstack06:40
*** prometheanfire has joined #openstack06:43
QRPIKEyoo06:44
*** rpawlik has joined #openstack06:46
*** rpawlik has quit IRC06:54
*** rpawlik has joined #openstack06:54
uvirtbotNew bug: #1033829 in horizon ""Launch instance" button could be enabled based on headroom" [Undecided,New] https://launchpad.net/bugs/103382906:55
*** prakasha-log has quit IRC07:00
*** prakasha-log has joined #openstack07:00
*** aspiers has quit IRC07:01
*** vila has joined #openstack07:01
*** EmilienM has joined #openstack07:03
*** kpavel has joined #openstack07:04
*** davepigott has joined #openstack07:04
*** jamespage has joined #openstack07:06
*** jakkudanieru has quit IRC07:06
*** jakkudanieru has joined #openstack07:06
*** reidrac has joined #openstack07:10
*** aspiers has joined #openstack07:13
*** _et has joined #openstack07:14
*** arBmind has quit IRC07:15
*** nRy has quit IRC07:21
*** Glace_ has quit IRC07:22
*** erikzaadi has joined #openstack07:23
*** miclorb has quit IRC07:29
*** dev_sa has joined #openstack07:29
*** arBmind has joined #openstack07:29
*** johnpostlethwait has joined #openstack07:31
*** iNdefiNite has quit IRC07:32
*** alex88 has joined #openstack07:35
*** kpavel_ has joined #openstack07:41
*** disposab1e has joined #openstack07:43
*** lynxman- has joined #openstack07:43
*** kpavel has quit IRC07:43
*** tru_tru has quit IRC07:43
*** salgado has quit IRC07:43
*** disposable has quit IRC07:43
*** agoddard has quit IRC07:43
*** rods` has quit IRC07:43
*** oubiwann has quit IRC07:43
*** n0ano has quit IRC07:43
*** huats has quit IRC07:43
*** aryan has quit IRC07:43
*** coolj has quit IRC07:43
*** andyhky` has quit IRC07:43
*** cp16net has quit IRC07:43
*** smoser has quit IRC07:43
*** mancdaz has quit IRC07:43
*** lynxman has quit IRC07:43
*** sc68cal has quit IRC07:43
*** ninkotech__ has quit IRC07:43
*** udagawa has quit IRC07:43
*** pandemicsyn has quit IRC07:43
*** ron-slc has quit IRC07:43
*** nikhil has quit IRC07:43
*** DuncanT has quit IRC07:43
*** arrsim has quit IRC07:43
*** mikalv has quit IRC07:43
*** cipriano has quit IRC07:43
*** Dirkpitt has quit IRC07:43
*** rturk has quit IRC07:43
*** Blake_Yeager has quit IRC07:43
*** chasmo has quit IRC07:43
*** xtoddx has quit IRC07:43
*** Spirilis has quit IRC07:43
*** kpavel_ is now known as kpavel07:43
*** amotoki_ has joined #openstack07:44
*** swarley has joined #openstack07:44
*** steveb_ has joined #openstack07:45
*** ZtF has joined #openstack07:45
*** bencherian has quit IRC07:47
*** amotoki has quit IRC07:47
*** nmistry has quit IRC07:48
*** bencherian has joined #openstack07:48
*** agoddard has joined #openstack07:49
*** bencherian has quit IRC07:52
*** kpavel has quit IRC07:55
*** johnpostlethwait has quit IRC07:57
*** p3N74d4V1D has joined #openstack07:58
p3N74d4V1Dheya07:58
kviiriHi07:59
melmothholaaa08:01
*** UICTamale has quit IRC08:01
*** adjohn has quit IRC08:02
*** kpavel has joined #openstack08:05
*** darraghb has joined #openstack08:05
p3N74d4V1Dhola melmoth08:06
p3N74d4V1D:)08:06
*** derekh has joined #openstack08:08
*** tru_tru has joined #openstack08:11
*** salgado has joined #openstack08:11
*** rods` has joined #openstack08:11
*** oubiwann has joined #openstack08:11
*** n0ano has joined #openstack08:11
*** huats has joined #openstack08:11
*** aryan has joined #openstack08:11
*** coolj has joined #openstack08:11
*** andyhky` has joined #openstack08:11
*** cp16net has joined #openstack08:11
*** smoser has joined #openstack08:11
*** mancdaz has joined #openstack08:11
*** sc68cal has joined #openstack08:11
*** ninkotech__ has joined #openstack08:11
*** udagawa has joined #openstack08:11
*** pandemicsyn has joined #openstack08:11
*** ron-slc has joined #openstack08:11
*** nikhil has joined #openstack08:11
*** DuncanT has joined #openstack08:11
*** arrsim has joined #openstack08:11
*** mikalv has joined #openstack08:11
*** cipriano has joined #openstack08:11
*** Dirkpitt has joined #openstack08:11
*** rturk has joined #openstack08:11
*** Blake_Yeager has joined #openstack08:11
*** chasmo has joined #openstack08:11
*** xtoddx has joined #openstack08:11
*** Spirilis has joined #openstack08:11
*** hubbard.freenode.net sets mode: +v pandemicsyn08:11
*** erikzaadi has quit IRC08:13
*** janisg has joined #openstack08:14
*** kashyap has joined #openstack08:15
*** erikzaadi has joined #openstack08:17
*** mkouhei has left #openstack08:17
*** QRPIKE has quit IRC08:18
*** vogxn has quit IRC08:18
*** _et has quit IRC08:19
*** joebaker has quit IRC08:19
*** UICTamale has joined #openstack08:25
*** alex88 has quit IRC08:26
*** k3rn has quit IRC08:28
*** tualatrix has quit IRC08:28
*** iNdefiNite has joined #openstack08:32
*** qazwsx has joined #openstack08:34
*** Neptu has joined #openstack08:35
*** itz_ has joined #openstack08:39
*** SEVMEK46 has joined #openstack08:40
*** k3rn has joined #openstack08:42
*** aspiers has quit IRC08:42
*** reed has joined #openstack08:42
*** pixelbeat has joined #openstack08:42
*** hattwick has joined #openstack08:43
*** SEVMEK46 has quit IRC08:44
p3N74d4V1Dstill working on getting VMs on different nodes to communicate08:50
p3N74d4V1DI notice when I send ARP requests08:50
p3N74d4V1Dcrafted ones08:50
p3N74d4V1Dall the hosts on my VM network receive them08:51
melmothsounds "normal" to me08:51
itz_Hi, I'm trying to run the quantum server. The last line I get is LookupError: URI scheme not known: 'call' (from egg, config)  http://pastebin.com/WwZP1gLV  08:51
melmoththat's what the bridge is there for, no?08:51
p3N74d4V1DI mean the controller, the compute...08:51
melmothahh08:51
p3N74d4V1Dthe VMs too :)08:51
p3N74d4V1Dyeah08:51
melmoththat does not sound normal to me :)08:51
p3N74d4V1Dwhat???08:52
melmothp3N74d4V1D, i do not understand why some boxes, such as the controller, not plugged into the static network bridge would receive the arp packet08:52
melmothitz_, looks like some paste config misconfiguration08:52
p3N74d4V1Dmelmoth, I use just one interface for everything08:53
melmothahhh08:53
melmothok. then i can understand08:54
*** alex88 has joined #openstack08:54
*** alex88 has quit IRC08:54
*** alex88 has joined #openstack08:54
melmothp3N74d4V1D, have you seen the recent blog post about networking ?08:54
melmothhttp://www.mirantis.com/tag/networking/08:54
p3N74d4V1Dnot yet08:54
melmothyou may wish to give it a read with a coffee or two (or three)08:54
p3N74d4V1Dloool08:54
*** aspiers has joined #openstack08:55
Dieterbewhen uploading blobs into swift, do you need to know the size in advance? or can you just keep uploading until you have no more data?08:55
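An aside on Dieterbe's question: swift accepts chunked transfer encoding on object PUTs, so the size does not need to be known up front (within the per-object size limit). A minimal sketch with placeholder token, URL and names:

    cat bigfile | curl -H "X-Auth-Token: $TOKEN" \
                       -H "Transfer-Encoding: chunked" \
                       -T - "$STORAGE_URL/mycontainer/myobject"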
melmothitz_, this is what it makes me think of: http://pythonpaste.org/deploy/08:56
itz_melmoth: http://pastebin.com/BdRYcR82  08:56
janisgwhat could the problem be if dnsmasq is not getting spawned on a compute node in a multi-host configuration with FlatDHCP?08:56
janisgi copied working config to new node, with slightly newer nova version and it's not working anymore08:57
melmothlooks like a problem in your composite pipeline, but i do not know pastedeploy well enough to tell what the problem is08:57
melmothjanisg, is nova-network running on the node ?08:57
janisgyes08:57
janisgi compared log outputs08:57
janisgon both nodes08:57
janisgand a few calls are missing08:57
janisgthe ones that start up dnsmasq08:58
p3N74d4V1Dmelmoth is that your blog?08:58
melmothno no no08:58
* p3N74d4V1D make some coffee08:58
p3N74d4V1Dhehe just asking08:58
melmothit's a blog by someone who knows what they are talking about :)08:58
melmothi don't know the guys, i just saw it posted in the openstack community newsletter and found it quite cool08:58
trygvisyeah, that blog was very nice09:00
janisgmelmoth, any thoughts what could I check?09:00
*** koolhead17 has joined #openstack09:01
*** lynxman- is now known as lynxman09:01
melmothnot sure..but i guess the nova-neworks logs, and may be set nova in verbose debug mode .09:01
*** lynxman has quit IRC09:01
*** lynxman has joined #openstack09:01
*** matwood has quit IRC09:02
*** jackh has quit IRC09:02
janisgwell, the logs are not complaining09:03
janisgdnsmasq is simply not started09:03
*** s34n has quit IRC09:04
trygvisI've had that happen to me if I had the wrong network configuration09:04
trygvisnetwork as in the list of networks configured09:04
janisgyou mean09:04
janisgnetworks that show up in  nova-manage network list09:05
trygvisyep09:05
*** MarkAtwood has quit IRC09:06
*** s34n has joined #openstack09:06
kviiriWhen using Quantum with OpenVSwitch plugin, is it well and healthy for ovs-vsctl list-ports br-int to output ports like tapXXXX...?09:07
kviiri(where X are hex digits)09:07
janisgtrygvis, maybe you know how to prevent a vlan id from being automatically assigned when creating a new network09:07
janisgbecause it takes vlan 100 by default, but I don't need any vlans in a dhcp setup09:08
trygvissorry, don't use vlans09:08
koolhead17hi all09:08
kviiriHey09:08
*** kmwhite has quit IRC09:09
*** tomoe_ has quit IRC09:09
*** aloga_ has quit IRC09:09
janisgwell maybe that vlan which is assigned by default is causing the problem09:10
*** dachary1 is now known as dachary09:11
*** dachary has joined #openstack09:11
janisgnova-manage network create private --multi_host=T --fixed_range_v4=172.16.1.0/24 --bridge_interface=br100 --num_networks=1 --network_size=25609:11
*** aloga has joined #openstack09:11
janisgafter this i got the vlan id set to 100  09:11
*** shang_ has joined #openstack09:12
*** shang has quit IRC09:15
*** Triade has joined #openstack09:16
janisgohh i found what was causing this default vlan09:17
janisgit was executed on the wrong host, where nova.conf doesn't have network_manager defined, so nova-manage uses the default setting, which is vlan09:18
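A quick sanity check for the situation janisg describes, assuming an Essex-era flat nova.conf; the manager shown is just an example value:

    # run nova-manage on a host whose nova.conf names the intended manager
    grep network_manager /etc/nova/nova.conf
    # expected output for a FlatDHCP setup is something like:
    #   network_manager=nova.network.manager.FlatDHCPManager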
*** maploin has joined #openstack09:19
*** MyAzhax has quit IRC09:28
*** MyAzhax has joined #openstack09:28
*** bbcmicrocomputer has joined #openstack09:32
*** simon_lucy has joined #openstack09:32
*** SkyMan has joined #openstack09:35
SkyManzynzel: Hi man are you there ??09:35
itz_melmoth: I had an old version of paste09:36
*** ServerTechLaptop has joined #openstack09:37
*** ejat has quit IRC09:37
SkyManHI Stackers , does anyone here have a good background with PowerDNS ??09:38
SkyMani kind of need some guidance :)09:39
*** kyriakos has joined #openstack09:39
zynzelSkyMan: what do you need?09:41
SkyManzynzel: thx for replying, i installed powerdns/poweradmin and i need to know how i can map hostnames to ips so that i can access my VMs using hostnames instead of IPs09:42
zynzelyou should create a view in the nova db09:43
zynzelwhich maps floating_ip->domain and domain->floating_ip09:43
SkyMani know about that but since i am a beginner i want to do it manually09:44
SkyManzynzel: lets go private plz09:44
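For the manual route SkyMan prefers, a hedged sketch: insert a static A record through the PowerDNS gmysql backend, assuming the stock domains/records schema; database name, zone, hostname and IP are placeholders:

    mysql pdns <<'SQL'
    INSERT INTO records (domain_id, name, type, content, ttl)
    SELECT id, 'vm01.cloud.example.com', 'A', '203.0.113.10', 300
      FROM domains
     WHERE name = 'cloud.example.com';
    SQL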
janisgtrygvis, you were right09:46
janisgthe main problem was with ip configs09:46
janisgand some libvirt sock access error09:46
trygvis\o/09:46
janisgyea09:46
p3N74d4V1Dhey melmoth, please can I see the routing table of your compute and controller nodes?09:49
p3N74d4V1Danyone using multi-host set up?09:49
*** clopez has joined #openstack09:50
janisgyes09:51
p3N74d4V1Djanisg: can you paste09:52
p3N74d4V1Dthe routing table09:52
p3N74d4V1Dof your controller and compute?09:52
*** goldfish has joined #openstack09:53
*** rods has joined #openstack09:54
*** oneiroi|gone is now known as oneiroi09:55
*** simon_lucy has quit IRC09:57
*** aspiers has quit IRC09:57
*** aspiers has joined #openstack09:59
melmothp3N74d4V1D, http://paste.openstack.org/show/20003/  10:02
melmoth(it s single host  mode)10:02
melmothbut nova-networks run on compute110:02
*** davepigott_ has joined #openstack10:02
p3N74d4V1Daight thanks10:04
*** Breaking_Pitt has joined #openstack10:05
*** saju_m has joined #openstack10:05
*** davepigott has quit IRC10:05
*** davepigott_ is now known as davepigott10:05
p3N74d4V1Dmelmoth the difference with my route is that everything goes on the bridge and I trust the bridge to properly dispatch traffic10:07
p3N74d4V1Dbut seems not to work10:07
*** davepigott_ has joined #openstack10:10
*** mattux_ has quit IRC10:11
*** mattux_ has joined #openstack10:11
*** agy_ is now known as agy10:12
*** mjfork has joined #openstack10:12
*** davepigott has quit IRC10:12
*** davepigott_ is now known as davepigott10:12
*** steveb_ has quit IRC10:14
melmothworks? don't touch anything :)10:15
p3N74d4V1Dno :10:15
uvirtbotNew bug: #1033903 in openstack-manuals "ring documentation confusing with regards to 'partition'" [High,Triaged] https://launchpad.net/bugs/103390310:16
*** QRPIKE has joined #openstack10:19
*** wiliam_ has quit IRC10:21
*** wiliam has quit IRC10:21
*** saju_m has quit IRC10:22
*** QRPIKE has quit IRC10:22
*** zhuadl has quit IRC10:22
*** swarley has quit IRC10:25
*** primozf has joined #openstack10:27
*** QRPIKE has joined #openstack10:28
*** p3N74d4V1D has quit IRC10:32
*** SkyMan has quit IRC10:39
uvirtbotNew bug: #1033915 in nova "Qpid: Deleted fanout queues still open too many journal files" [Undecided,New] https://launchpad.net/bugs/103391510:45
*** deepakcs has quit IRC10:46
*** bsza has joined #openstack10:48
*** dev_sa has quit IRC10:48
*** dev_sa has joined #openstack10:51
*** miclorb has joined #openstack10:53
davepigottjamespage: ping11:04
davepigottjamespage: I can now relax. It's all working. :)11:04
davepigottjamespage: Was missing "multi_nopde=true" in nova.conf. Didn't know about that, and would have assumed that to be a default, but I can't tell you how good it feels to have a reliable cloud at last.11:05
*** _et has joined #openstack11:05
*** samkottler has joined #openstack11:07
*** mrunge has quit IRC11:10
*** ijw has quit IRC11:11
*** DavidLevin has joined #openstack11:11
*** ijw has joined #openstack11:11
*** Rajesh has joined #openstack11:13
*** Rajesh is now known as Guest7894711:14
*** maoy has joined #openstack11:16
*** maploin has quit IRC11:18
*** miclorb has quit IRC11:18
*** tomoe_ has joined #openstack11:20
*** jkordish has joined #openstack11:25
*** Trixboxer has joined #openstack11:27
*** rpawlik has quit IRC11:27
*** msavy has joined #openstack11:28
*** maploin has joined #openstack11:31
uvirtbotNew bug: #1033933 in nova "product_version is missing from software_version on xcp-xapi" [Undecided,In progress] https://launchpad.net/bugs/103393311:31
*** jakkudan_ has joined #openstack11:37
*** Guest78947 has quit IRC11:37
*** jakkudan_ has quit IRC11:37
*** jakkudan_ has joined #openstack11:38
*** hggdh has quit IRC11:38
*** hggdh has joined #openstack11:39
*** jakkudanieru has quit IRC11:40
*** kiffer84 has quit IRC11:46
*** maurosr has joined #openstack11:47
*** milner has joined #openstack11:48
*** supriya has joined #openstack11:51
*** bharata has quit IRC11:52
*** osier has quit IRC11:52
*** desai has quit IRC11:52
*** markvoelker has joined #openstack11:53
*** roaet has joined #openstack11:54
*** lts has joined #openstack11:55
*** rmartinelli has joined #openstack11:55
*** kpavel has quit IRC11:57
*** kpavel has joined #openstack11:58
*** lipinski has left #openstack11:59
*** roaet has quit IRC11:59
*** dwcramer has quit IRC12:02
*** p3N74d4v1D has joined #openstack12:02
*** simon_lucy has joined #openstack12:02
kviiriHello, I need help with DHCP12:05
Drakizin general ?12:06
kviiriRegardin OpenStack and Quantum12:06
vachonanyone know how exactly to configure for kvm block migration?12:06
vachonsince the docs dont really exist12:07
kviiriVM is sending DHCP requests. The requests show up on the correct interface on the Compute node, but don't arrive on the other side12:08
*** dprince has joined #openstack12:08
vachonare you allowing dhcp in the security group?12:08
vachonsounds like an ingress block12:09
kviiriThat shouldn't be the problem because DHCP worked before I installed Quantum.12:09
*** amotoki has joined #openstack12:10
kviiriA coworker guessed that it's probably an issue with the virtual networks but I'm fairly clueless on how to debug/fix it12:10
*** jkordish has quit IRC12:11
*** gtirloni has joined #openstack12:12
*** antenagora has joined #openstack12:12
*** amotoki_ has quit IRC12:13
*** h0cin has joined #openstack12:14
vachoncan you span the port on the switch?12:15
vachonthat is your best bet12:16
kviiriI'm unfamiliar with that stuff :/12:17
vachonwhats your switch?12:17
*** ahasenack has joined #openstack12:18
*** Breaking_Pitt is now known as Breaking_Afk12:19
*** simon_lucy has quit IRC12:19
kviiriHP ProCurve12:19
vachoncli access?12:19
kviiriI don't have CLI access personally12:20
kviiriI could find someone who has though12:20
*** Glace_ has joined #openstack12:20
*** littleidea has joined #openstack12:20
vachonyour best bet is, get to the switch12:20
vachonplug in your laptop12:20
vachonspan the port12:20
vachonand use wireshark12:20
vachonor get a netadmin to do something similar directly in the switch12:21
kviiriTo find out whether the DHCP packets arrive to the switch, right?12:21
vachonyes12:21
kviiriAlrighty12:21
kviiriThanks12:21
vachonto basically trace the patch12:21
vachon*path12:21
*** bsza has quit IRC12:22
*** littleidea has quit IRC12:22
*** chrisfer has joined #openstack12:23
*** bsza has joined #openstack12:23
*** dualism has quit IRC12:25
*** secbitchris has joined #openstack12:26
*** lorin1 has joined #openstack12:26
uvirtbotNew bug: #1033960 in nova "Traceback when attaching volunes on quantal" [Undecided,In progress] https://launchpad.net/bugs/103396012:31
*** wiliam has joined #openstack12:31
*** eglynn is now known as hungry-eglynn12:33
_val_Hi everyone. I'm just reading about ceph. Before I continue reading: what's the difference, or can ceph work without swift?12:35
_val_Hello. koolhead1712:35
*** ZtF has quit IRC12:36
koolhead17hello _val_12:36
*** imsplitbit has joined #openstack12:36
uvirtbotNew bug: #1033963 in nova "booting instance w/ metadata fails on XenServer" [Critical,In progress] https://launchpad.net/bugs/103396312:36
_val_ koolhead17 reading about ceph12:37
_val_I would like to implement ceph12:37
_val_Now I'm running nova-volume service only.. just trying to understand the concept.12:38
*** halfss has joined #openstack12:39
*** cloudvirt has joined #openstack12:39
*** rkukura has quit IRC12:40
melmoth_val_, i know nothing about ceph, but i have been told that it's 1) like swift, an object store (bucket thingy), with other features such as 2) a block device and 3) a file system12:41
janisgthere is tons of info about ceph12:41
janisgon sebastians blog12:42
*** tgall_foo has joined #openstack12:42
*** tgall_foo has quit IRC12:42
*** tgall_foo has joined #openstack12:42
janisghttp://www.sebastien-han.fr/blog/archives/  12:42
*** QRPIKE has quit IRC12:42
_val_melmoth: thank you. I'm reading about ceph and really would like to implement it as it seems much easier to implement than swift.12:42
janisghttp://www.sebastien-han.fr/blog/categories/ceph/  12:42
_val_janisg: thank god it's in english :)12:43
_val_Thanks for the link.12:43
janisgit's quite good though12:43
janisghe has even done performance tests12:44
_val_janisg: do you have any background information about Ceph vs Swift?12:44
*** huats has quit IRC12:44
*** sacharya has joined #openstack12:44
janisgno ;>12:44
janisghaven't got that far12:44
janisgi have tried swift for 10 mins12:45
janisgthat's all12:45
_val_To me, Ceph seems to be much easier to set up. I'm concerned about data integrity. How secure is it? Is data replicated across different storage blocks on different ceph servers?12:45
koolhead17_val_, nice12:45
*** acadiel has joined #openstack12:47
_val_koolhead17: First I need to be able to identify the pros and cons of swift vs ceph in terms of ease of implementation, data integrity, scalability, etc.12:47
koolhead17_val_, a blog post once you're done with your study would be great :P12:48
_val_koolhead17: sure. I've reserved another section for Ceph :)12:48
acadiel*wave* Hi, everyone... trying to load up Openstack on CentOS, and saw this channel existed.  Had to bring in personal laptop on open WiFi to get on IRC (corporate network = no IRC)12:48
koolhead17acadiel, :)12:49
_val_Basics of how to setup glance.. is done.12:49
*** deepakcs has joined #openstack12:50
*** msinhore has joined #openstack12:50
*** maoy has quit IRC12:50
koolhead17_val_, you mean glance using ceph for the image store12:51
_val_koolhead17: no just a basic setup of glance.12:52
_val_Without ceph interference. Ceph has still to come. First some pre-research12:52
*** FlorianOtel has joined #openstack12:53
koolhead17Bwahhhh look who is here FlorianOtel12:53
FlorianOtelyeah.12:53
*** ewindisch has joined #openstack12:53
FlorianOtellook what the cat dragged in :)12:53
vachonstupid question, but would --domain=foo in nova.conf change dnsmasq?12:53
acadielSo, I'm following this tutorial:  http://fedoraproject.org/wiki/Getting_started_with_OpenStack_EPEL  - I have an "instance" (under "Launch an Instance") - I can virsh list it and nova list it.  I can "ping" my demobr network, 192.168.0.1.  However, I can't ssh to the instance nor ping it from the actual hypervisor/all-in-one Openstack system.  Thoughts on what I need to do?12:54
acadielit says it's 192.168.0.2  12:54
koolhead17FlorianOtel, welcome back!! :P12:54
FlorianOtelthanks. Time to kick around some source code -- been lost to the "Dark Side" (as Kevin puts it) for too long :((12:55
FlorianOtel2.5 months, to be exact :(12:55
vachonnvm my question, found it in source12:55
*** troytoman-away is now known as troytoman12:56
lorin1acadiel: have you looked at the console-log of the instance to make sure it booted correctly?12:56
lorin1acadiel: Also, are the security groups set properly to allow ping and ssh to the instance?12:57
acadielnova console-log myserver (from the tutorial) comes up blank12:57
*** sacharya has quit IRC12:58
acadielI'm guessing the tutorial set up the groups correctly... I'm just starting on figuring this out since OSCON kind of kicked me in gear :)12:58
*** edygarcia has joined #openstack12:58
acadielI'm still a bit fuzzy on how everything fits together... trying to dive in, read, and learn lessons while I go :)12:59
*** edygarcia has quit IRC12:59
lorin1acadiel: I'm just looking at http://fedoraproject.org/wiki/Getting_started_with_OpenStack_EPEL#Nova_Network_Setup and I don't see the lines where security group stuff is set.12:59
lorin1acadiel: Should be done like this: http://docs.openstack.org/essex/openstack-compute/install/apt/content/running-an-instance.html#security-groups12:59
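The commands behind that doc link, roughly (Essex-era nova client syntax; the 0.0.0.0/0 CIDR opens the rules to everyone and is only an example):

    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0   # allow ping
    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0    # allow ssh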
*** msinhore has quit IRC12:59
acadielLet me check, lorin13:00
lorin1acadiel: Although it's possible Fedora  has some magical tools for setting these, they have some openstack helper scripts I'm not familiar with.13:00
acadielI also tried to virsh console 1 to get to it.. just dumped me back to the command line13:00
acadielshell I mean13:00
lorin1acadiel: The fact that the console log is blank sounds a little troubling.13:01
lorin1acadiel: Another option is to use vnc to connect to the console, not sure if it works through virsh with the libvirt settings that OpenStack uses.13:01
*** edygarcia has joined #openstack13:01
acadielYep, security group says "default/default"13:02
*** pergaminho has joined #openstack13:03
*** marrusl has joined #openstack13:03
acadiellet me add the ssh per the KB13:03
acadielthe ssh rule I mean13:03
*** roaet has joined #openstack13:04
*** supriya has quit IRC13:04
*** aliguori has joined #openstack13:05
*** aliguori_ has joined #openstack13:05
*** aliguori has quit IRC13:05
*** aliguori_ has quit IRC13:05
*** edude03 has quit IRC13:05
*** roge has joined #openstack13:05
acadielno route to host now when I try to ssh - must be a bridge issue13:05
*** edygarcia has quit IRC13:05
acadielbut I'm getting further now :)13:05
*** Neptu has quit IRC13:06
*** edygarcia has joined #openstack13:06
*** supriya has joined #openstack13:06
*** desai has joined #openstack13:07
*** hungry-eglynn is now known as eglynn13:08
*** tserong has quit IRC13:08
*** aliguori has joined #openstack13:08
lorin1acadiel: Check the syslog on the controller to see if the DHCP request was received and that an IP was offered back.13:08
lorin1acadiel: Looks something like this:13:08
lorin1Aug  5 14:29:01 precise64 dnsmasq-dhcp[18541]: read /var/lib/nova/networks/nova-br100.conf  13:08
lorin1Aug  5 14:29:41 precise64 dnsmasq-dhcp[18541]: DHCPDISCOVER(br100) fa:16:3e:74:b4:ec  13:08
lorin1Aug  5 14:29:41 precise64 dnsmasq-dhcp[18541]: DHCPOFFER(br100) 192.168.100.2 fa:16:3e:74:b4:ec  13:08
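One way to pull out just those dnsmasq lines; the log path is a distro detail (/var/log/syslog on Ubuntu, /var/log/messages on Fedora/CentOS):

    grep dnsmasq-dhcp /var/log/messages | tail -20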
*** markmcclain has joined #openstack13:09
*** jaypipes has joined #openstack13:09
*** tserong has joined #openstack13:09
*** sandywalsh has joined #openstack13:10
acadielI see this in the .conf:  macaddr,myserver.novalocal,192.168.0.2  root@myhypervisor.domain.org  13:10
*** cryptk is now known as cryptk|offline13:10
acadielnova list does show the 192.168.0.2 but the .conf file doesn't show any DHCPDISCOVER or DHCPOFFER13:11
*** dwcramer has joined #openstack13:11
lorin1acadiel: That should be in /var/log/syslog or /var/log/messages or wherever it is that Fedora puts the syslog13:12
*** mrjazzcat has joined #openstack13:12
*** vmlinuz has joined #openstack13:12
*** DavidLevin has quit IRC13:12
*** DuncanT has quit IRC13:13
*** DuncanT has joined #openstack13:13
*** datajerk has quit IRC13:13
acadielI see a dnsmasq-dhcp (DHCP, static leases only to 192.168.0.2) entry in messages...13:13
*** ejat has joined #openstack13:14
*** datajerk has joined #openstack13:14
acadiellet me try manually adding a route to the routing table to go to 192.168.0.1  13:14
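A sketch of the kind of host route acadiel is talking about; the bridge name comes from the tutorial and the /24 is an assumption:

    ip route add 192.168.0.0/24 dev demobr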
*** sandywalsh has quit IRC13:15
*** lazyshot has joined #openstack13:15
*** lborda has joined #openstack13:17
*** edygarcia_ has joined #openstack13:18
*** tongli has joined #openstack13:19
*** dolphm has joined #openstack13:19
*** natea has joined #openstack13:20
*** msinhore has joined #openstack13:21
*** jathanism is now known as zz_jathanism13:21
*** edygarcia has quit IRC13:21
kviirivachon: Found the problem13:22
*** ejat has quit IRC13:22
kviirivachon: It was a VLAN issue13:22
vachonthat would do it13:22
*** edygarcia has joined #openstack13:23
*** edygarcia_ has quit IRC13:23
*** dwcramer has quit IRC13:23
kviiriIt's using a VLAN that doesn't correspond to any actually existing Virtual network13:23
vachonso it looks like instance_dns_domain (in nova.conf) has no effect on the dnsmasq process.  Am I missing where it should be or do I have the wrong flag?13:24
*** natea has quit IRC13:25
vachonugh, it is in nova13:25
vachonwhy didnt i look there13:25
*** lipinski1 has joined #openstack13:26
kviiriAny idea why an instance would send DHCP requests to VLAN 4095?13:26
lipinski1How to troubleshoot nova-network not starting dnsmasq?13:26
*** cloudvirt has quit IRC13:26
vachonkviiri: that's the reserved vlan13:26
kviirivachon: Reserved?13:27
vachonit's considered a discard vlan13:27
*** KarinLevenstein has joined #openstack13:27
vachonbut occasionally people use it for other things13:27
kviirivachon: Weird. When I specified a network for this tenant I specifically set --vlan=10013:27
vachonyou should look into missing configs13:28
kviiriI may have gotten an idea13:28
*** natea has joined #openstack13:28
kviiriYeah, I got it alright... too bad it didn't work13:29
*** edygarcia_ has joined #openstack13:29
*** ejat has joined #openstack13:29
kviiriI don't think there are any missing configs, at least I've done my best to follow the docs13:29
kviiriGranted, after spending a few months with OpenStack I know that doesn't guarantee anything :P13:30
*** cloudvirt has joined #openstack13:30
*** ayoung has joined #openstack13:30
vachonyea, considering I'm still stuck on changing novalocal to something else13:30
vachonugh13:30
*** edygarcia has quit IRC13:31
*** edygarcia_ is now known as edygarcia13:31
*** GiBa has joined #openstack13:32
*** GiBa has left #openstack13:32
*** zhuadl has joined #openstack13:32
lipinski1Anyone able to provide info on how nova-network starts dnsmasq?  It was starting it, but now it no longer does.  Not sure what changed to prevent it, and I don't see any indication in the logs.13:32
*** msinhore has quit IRC13:33
*** markmcclain has quit IRC13:33
*** ejat has quit IRC13:33
*** never2far has quit IRC13:34
*** never2far has joined #openstack13:34
vachonoh well that's fun https://answers.launchpad.net/nova/+question/205136  13:34
vachonat least i know it worked13:34
*** sacharya has joined #openstack13:36
vachonand... no it didnt13:37
vachon*head -> desk*13:37
*** sean1 has left #openstack13:37
*** ejat has joined #openstack13:37
*** Breaking_Afk is now known as Breaking_Pitt13:37
*** _et has quit IRC13:37
*** msinhore has joined #openstack13:37
*** ServerTechLaptop has quit IRC13:39
*** ServerTechLaptop has joined #openstack13:39
lorin1lipinski1: Restarting nova-network doesn't start up dnsmasq?13:39
lipinski1lorin1: no.  That's my problem.  It used to.  I was troubleshooting a problem where dnsmasq wouldn't respond to DHCP requests.  Now nova-network won't start it13:40
cooljlipinski1: are you sure dnsmasq is not running? restarting nova-network only sends a HUP to dnsmasq I think, which isn't very useful. not around a box to play with atm, but I wonder if there are stale locks in /var/lock/nova (maybe /var/lock/dnsmasq) preventing dnsmasq from starting. /throwing random stuff out13:40
lipinski1We've rebuilt the database, triple-checked everything in nova.conf and dnsmasq.conf - no luck13:40
lorin1lipinski1: Does nova-network start properly, is it just dnsmasq that isn't starting?13:40
lipinski1yes - nova-network starts properly.  nova-manage service list shows nova-network "happy"13:41
lipinski1nothing in /var/lock for dns*13:41
*** msinhore has quit IRC13:42
lipinski1someone mentioned that you need to start an instance to get nova-network to start dnsmasq.  But, now we're having problems starting instances - they get stuck in BUILD for a while and eventually RPC timeout13:43
*** ZtF has joined #openstack13:43
*** rmartinelli has quit IRC13:44
*** mattray has joined #openstack13:44
*** Blackavar has quit IRC13:45
*** dwcramer has joined #openstack13:45
*** dendro-afk is now known as dendrobates13:46
*** natea has quit IRC13:47
*** imsplitbit has quit IRC13:47
cooljdoes rabbitmqctl list_consumers on the rabbit node show the host that the instance was scheduled to in the list as compute?13:50
lipinski1coolj: If you're talking to me, I don't think we are using rabbitmq13:51
lipinski1using qpidd13:52
*** zz_jathanism is now known as jathanism13:53
lorin1lipinski1: Any problems in nova-compute.log?13:54
*** dpkshetty has joined #openstack13:54
*** dpkshetty has quit IRC13:55
*** sandywalsh has joined #openstack13:55
*** shaon has quit IRC13:55
*** deepakcs has quit IRC13:56
lorin1lipinski1: At this point, I'd probably start adding logging statements to the code. In nova/network/manager.py, anytime you see self.driver.update_dhcp(…), that should eventually lead to calling  nova/network/linux_net.py:restart_dhcp(), which is the function that starts up dnsmasq.13:56
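A quick way to locate the code paths lorin1 names before sprinkling in LOG.debug() calls; the site-packages path is assumed from the python2.6 tracebacks pasted later in this log:

    grep -n "update_dhcp\|restart_dhcp" \
        /usr/lib/python2.6/site-packages/nova/network/manager.py \
        /usr/lib/python2.6/site-packages/nova/network/linux_net.py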
melmothhmmm. Guys, i am worried: http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/ mentions that use of vlans "is not supported by FlatDHCPManager and FlatManager"13:58
*** phschwartz-rm has joined #openstack13:58
lipinski1lorin1: 2012-08-07 08:56:35 TRACE nova.rpc.amqp Timeout: Timeout while waiting on RPC response.13:58
melmothdoes this mean that, as soon as you need some packets flagged with a vlan tag, you have to use the vlan network manager mode?13:59
*** deepakcs has joined #openstack13:59
lipinski1lorin1: when starting an instance, I get RPC timeout in network.log on controller, and RPC timeout in compute.log on compute node.13:59
lipinski12012-08-07 08:56:35 TRACE nova.rpc.impl_qpid   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 651, in next_receiver14:00
lipinski12012-08-07 08:56:35 TRACE nova.rpc.impl_qpid     raise Empty14:00
lipinski12012-08-07 08:56:35 TRACE nova.rpc.impl_qpid Empty: None14:00
lorin1lipinski1: Sounds like either there's an issue with qpid, or one of the nova-* processes is blocking which causes everything to fall apart.14:00
lorin1lipinski1: Does the RPC stuff work at first? Do you see the request go from nova-api to nova-scheduler to nova-compute?14:00
*** jkordish has joined #openstack14:02
*** KarinLevenstein1 has joined #openstack14:02
*** edygarcia_ has joined #openstack14:02
*** edygarcia has quit IRC14:02
*** edygarcia_ is now known as edygarcia14:02
*** ecarlin has joined #openstack14:02
*** KarinLevenstein has quit IRC14:03
*** andreastt has joined #openstack14:03
andreasttIs it possible to terminate an instance through the nova command-line interface?14:03
lipinski1lorin1: not sure how I tell.  I'm clearing out the logs and spinning up a new instance.14:03
*** iNdefiNite has quit IRC14:03
lipinski1I see this in api.log:14:04
lipinski1Generated ERROR from vm_state=error task_state=networking.14:04
cooljandreastt: nova delete <ID>14:04
andreasttcoolj: Ah.  Thank you! (-:14:05
lipinski1then the RPC Timeout in network.log and compute.log14:05
*** defect has quit IRC14:05
*** rmartinelli has joined #openstack14:05
*** defect has joined #openstack14:05
*** edygarcia has quit IRC14:05
lipinski1we've rebooted the controller node a few times, and rebuilt the db.14:05
cooljandreastt: Just to be clear, "nova delete" will terminally destroy an instance--you can't get it back. Did you mean this, or did you mean "shutdown the os"?14:07
andreasttcoolj: Could I also ask you what “shutdown, re-image and re-boot” means?  Specifically the “re-image” part?14:07
andreasttcoolj: I did mean that.14:07
andreasttI seem to have a terminology problem (-:14:07
lorin1lipinski1: What are  the last few lines in the nova-network.log before it times out?14:08
*** KarinLevenstein1 has left #openstack14:08
lipinski1Attempts to get semaphore for get_dhcp.  Then RPC Timeout.  I'll paste to pastebin14:08
*** KarinLevenstein1 has joined #openstack14:09
*** markmcclain has joined #openstack14:09
*** mattux_ is now known as mattux14:09
lipinski1lorin1: http://pastie.org/4406003  14:10
*** Glace_ has quit IRC14:11
*** Blackavar has joined #openstack14:11
*** davepigott_ has joined #openstack14:12
*** davepigott has quit IRC14:12
*** davepigott_ is now known as davepigott14:12
*** whenry has joined #openstack14:12
lorin1lipinski1: My guess would be that nova-network gets blocked somewhere after it gets the _get_dhcp semaphore, and then a periodic task launches that causes a deadlock. I can't tell from the log where nova-network is blocking, though.14:13
lipinski1lorin1: On another system of ours, when I restart nova-network, I see it do the iptables stuff, then immediately the dhcp stuff.  However, on this system, when I restart nova-network, it does the iptables stuff only.  Then the dhcp stuff comes in later as a periodic task.14:14
lipinski1Not sure why nova-network refuses to do the dhcp stuff.  But, again, I don't know how it starts dnsmasq...14:15
*** pergaminho has quit IRC14:15
lorin1lipinski1: I think it's getting stuck before it gets to the start dnsmasq code. On your other system, what's the line immediately after "Got semaphore "get_dhcp" for method "_get_dhcp_ip"" in nova-network.log?14:15
*** amotoki has left #openstack14:16
*** Brian___ has joined #openstack14:17
*** Brian___ has quit IRC14:18
lipinski1lorin1: http://pastie.org/4406040  14:18
*** ServerTechLaptop has quit IRC14:18
*** lloydde has joined #openstack14:20
*** matiu has joined #openstack14:20
*** jakkudanieru has joined #openstack14:20
UICTamalein a multi-node nova-compute install, do you generally have to run nova-network / dnsmasq on every node, or just the controller?14:20
*** antenagora has quit IRC14:22
*** rnorwood has joined #openstack14:22
*** jakkudan_ has quit IRC14:23
melmothUICTamale, people tend to like the multi-host mode (nova-network on each compute node)14:24
melmothbecause, as each compute node acts as a gateway for the vms it hosts, there is less chance of having all public traffic down simply because the single nova-network node may be down14:24
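A minimal sketch of the nova.conf side of the multi-host mode melmoth describes (flag names assumed from Essex-era FlatDHCP setups; interfaces are placeholders). These lines go on every compute node that also runs nova-network:

    # assumption: eth1 carries the fixed (VM) network, eth0 the public/floating traffic
    multi_host=True
    network_manager=nova.network.manager.FlatDHCPManager
    flat_interface=eth1
    public_interface=eth0

The network itself also has to be flagged multi-host, e.g. created with --multi_host=T as in janisg's nova-manage network create command earlier in the log.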
*** matiu has quit IRC14:24
UICTamalemelmoth: Any particular reason?  I'm asking because I can't reach my new VMs now that I have a second nova-compute node14:24
*** Glace_ has joined #openstack14:24
UICTamalemelmoth: Makes sense14:25
*** maoy has joined #openstack14:25
UICTamaledo I need to install dnsmasq or will installing nova-network take care of that?14:25
*** MarcMorata has joined #openstack14:27
*** vogxn has joined #openstack14:28
*** allsystemsarego has joined #openstack14:28
melmothi think you do not need to install it manually14:28
melmothbecause i do not remember installing it manually, and it "just worked"14:29
UICTamalewhen I do ps -ef | grep nova on my controller, I see dnsmasq running14:29
UICTamalebut on my second node, the same command shows nova-network but no dnsmasq14:29
UICTamaleI think that's why it's not working, but I'm not sure what starts dnsmasq14:30
lipinski1lorin1: put some LOG.debug calls in the setup_network_on_host methods and they're not getting called.14:31
*** jakkudanieru has quit IRC14:32
lorin1lipinski1: Based on that log, it definitely looks like nova-network is getting stuck somewhere. I think you'll have to sprinkle in more LOG statements until you can identify which method it's getting blocked in.14:33
lorin1lipinski1: clearly it's somewhere before ensure_bridge, since it never even tries to get that semaphore.14:33
lipinski1lorin1: ok, thanks.  I'm not familiar with python at all, so this may take a while.14:33
lorin1lipinski1: Good luck!14:34
*** zehicle has quit IRC14:34
*** cp16net is now known as cp16net|away14:34
*** zehicle has joined #openstack14:34
*** cp16net|away is now known as cp16net14:34
lipinski1lorin1: thanks for the help.  I'll post something later if I find the culprit.  Gotta run.14:34
*** lipinski1 has left #openstack14:34
*** supriya has quit IRC14:34
*** andreastt has left #openstack14:34
*** llang629 has quit IRC14:35
*** mattray has quit IRC14:39
*** datsun180b has joined #openstack14:39
*** philips_ has quit IRC14:40
*** philips_ has joined #openstack14:41
*** matwood has joined #openstack14:41
*** zhuadl has quit IRC14:42
*** MarkAtwood has joined #openstack14:42
p3N74d4v1Dmelmoth alive?14:42
melmothand kicking.14:42
halfssdoes keystone+swift allow making some objects public (accessible without auth)?14:43
melmothp3N74d4v1D, how is your install going then ?14:43
*** troytoman is now known as troytoman-away14:45
*** hunglin has joined #openstack14:47
*** never2far has quit IRC14:49
*** ondergetekende has quit IRC14:49
p3N74d4v1Dmelmoth: still having networking issue14:50
p3N74d4v1DVMs can speak only to their host node14:50
phschwartz-rmAfter a reboot I am seeing the following in my apache logs when trying to access the dashboard. http://paste2.org/p/2097393 Any ideas?14:50
melmothp3N74d4v1D, do the compute nodes show some nat rules ? (they should)14:51
melmothand do the outgoing packets thus nated have a correct source address set? (the ip must be the one of the nova-network node)14:51
*** davepigott has quit IRC14:51
p3N74d4v1Dhttp://pastebin.com/5YxEhR3V  14:52
p3N74d4v1DI am running multihost14:52
*** rkukura has joined #openstack14:53
*** BlackMaria has joined #openstack14:54
melmothp3N74d4v1D, 192.168.178.82 is the ip of the current compute node ?14:54
p3N74d4v1Dyes14:55
*** Gordonz has joined #openstack14:55
melmothwith tcpdump can you confirm outgoing packets from a vm have this address set as source? (they should, because of nova-network-float-snat, but one never knows)14:55
melmothto be honest, i do not know what could be wrong there.14:56
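A hedged version of the check melmoth suggests, run on the compute node; interface names are assumptions:

    tcpdump -n -i br100 icmp    # traffic from the VM, pre-NAT, on the fixed-network bridge
    tcpdump -n -i eth0 icmp     # post-NAT; the source should now be the node's own address
    iptables -t nat -L nova-network-float-snat -n -v   # the SNAT chain and its packet counters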
p3N74d4v1Dme too14:57
p3N74d4v1DI have internet access from VMs14:58
melmothconference call time.... will be back in an hour or so14:58
*** natea has joined #openstack14:59
*** rnorwood1 has joined #openstack14:59
*** ejat has quit IRC14:59
p3N74d4v1Daight see you melmoth15:00
*** _et has joined #openstack15:00
*** pvankouteren has quit IRC15:00
*** rnorwood has quit IRC15:00
phschwartz-rmThe dashboard error was a little more detailed this time. http://paste2.org/p/2097399  15:01
phschwartz-rmAny thoughts?15:01
*** sdake has quit IRC15:02
*** azret has joined #openstack15:02
*** sdake has joined #openstack15:02
*** KarinLevenstein1 has quit IRC15:02
*** imsplitbit has joined #openstack15:02
*** rnorwood has joined #openstack15:02
*** KarinLevenstein has joined #openstack15:03
*** reidrac has quit IRC15:03
*** iNdefiNite has joined #openstack15:04
*** bencherian has joined #openstack15:05
*** deepakcs has quit IRC15:05
*** melmoth has quit IRC15:06
koolhead17ttx, ping15:08
*** mattray has joined #openstack15:08
ttxkoolhead17: pong15:08
*** lloydde has quit IRC15:09
*** epim has joined #openstack15:10
*** freeflyi1g has joined #openstack15:11
uvirtbotNew bug: #1031311 in nova "CVE-2012-3361 not fully addressed" [Critical,Confirmed] https://launchpad.net/bugs/103131115:11
uvirtbotNew bug: #1034021 in openstack-manuals "Missing figure, trailing off and incorrect use of its it's in Object Storage Admin guide" [High,Triaged] https://launchpad.net/bugs/103402115:11
phschwartz-rmAnyone have an idea about that import error?15:11
*** melmoth has joined #openstack15:12
*** cp16net is now known as cp16net|away15:12
*** melmoth has quit IRC15:13
*** melmoth has joined #openstack15:13
*** freeflying has quit IRC15:13
*** halfss_ has joined #openstack15:17
*** kpavel has quit IRC15:17
*** littleidea has joined #openstack15:18
*** guigui3 has quit IRC15:18
*** cp16net|away is now known as cp16net15:19
*** nelson1234 has quit IRC15:19
*** KarinLevenstein has quit IRC15:19
*** markvoelker has quit IRC15:19
*** Glace_ has quit IRC15:19
*** lorin1 has quit IRC15:20
*** halfss has quit IRC15:20
*** Glace has joined #openstack15:20
*** rnirmal has joined #openstack15:21
*** natea has quit IRC15:21
*** nelson1234 has joined #openstack15:22
*** nelson1234 has quit IRC15:22
*** markvoelker has joined #openstack15:23
*** KarinLevenstein has joined #openstack15:26
*** cloudvirt has quit IRC15:26
*** jkordish has quit IRC15:27
*** dachary has quit IRC15:28
*** datsun180b has quit IRC15:29
*** markmcclain has quit IRC15:29
*** dev_sa has left #openstack15:29
*** datsun180b has joined #openstack15:30
*** kpavel has joined #openstack15:30
*** supriya has joined #openstack15:31
*** rnorwood has quit IRC15:32
*** shang_ has quit IRC15:34
*** rnorwood has joined #openstack15:34
*** datsun180b has quit IRC15:34
*** dubsquared has joined #openstack15:35
*** hggdh has quit IRC15:35
*** llang629 has joined #openstack15:36
*** lorin1 has joined #openstack15:36
*** rnorwood1 has joined #openstack15:37
*** maploin has quit IRC15:37
*** dolphm has quit IRC15:38
*** ecarlin has quit IRC15:38
*** rnorwood has quit IRC15:39
*** hggdh has joined #openstack15:39
*** ntt_dev has joined #openstack15:40
*** alanmac has joined #openstack15:40
ntt_devhi, can I ask one thing about swift?15:41
*** troytoman-away is now known as troytoman15:41
*** natea has joined #openstack15:42
notmynamentt_dev: ask away15:43
ntt_devthanks, notmyname15:43
*** dachary has joined #openstack15:43
*** led_belly has joined #openstack15:44
alex88hi guys, i'm running a vm using kvm, it's the only vm running on an i7 16gb ram host, but it's taking about 5 mins to upgrade packages; vm load is about 4, host load is like 9, but each cpu has about 5% load.. what else can it be?15:45
*** heckj has joined #openstack15:45
ntt_devi have a tenant and many users associated with it. Is it normal that all users see all folders?15:45
*** kpavel has quit IRC15:45
ntt_devIf user A creates a folder, user B can see this folder?15:46
alex88off, they're in the same tenant15:46
alex88*ofc15:46
ntt_devand what is the best way to avoid this? create more tenants?15:46
gtirlonialex88: what's top showing in that VM? too much wait% ?15:47
alex88gtirloni, around 50%15:48
*** kpavel has joined #openstack15:48
Glacentt_dev: are you using keystone?15:48
ntt_devyes, Glace15:48
alex88gtirloni, can it be due to that? it took 1 min to unpack libc6-dev15:50
ntt_devconceptually, users should belong to the same tenant but have private folders. Is it possible?15:51
GlaceI do not know keystone, sorry. But I know that there are at least per-tenant permissions, so you can give specific access to other tenants. I am not sure if Keystone has fine-grained permissions for users within tenants.15:51
gtirlonialex88: i'm assuming you're playing with cpu capping so... disk too busy perhaps?15:52
*** shang has joined #openstack15:52
gtirlonialex88: are you hosting the vm image over the network or something?15:52
*** whenry has quit IRC15:52
*** MarcMorata has quit IRC15:52
*** MarcMorata has joined #openstack15:53
GlaceThat would be something to check with a keystone person or maybe notmyname knows15:53
ntt_devThanks Glace.  alex88: do you know some solution?15:53
alex88gtirloni, mmhh, iotop on host shows some processes using sometimes some kB/s, nothing so heavy, IO goes at 99% sometimes with drbd, but no file read/write by it15:54
alex88gtirloni, also no, they're on local fs15:54
*** supriya has quit IRC15:55
alex88ntt_dev, nope, haven't played with swift :(15:55
alex88gtirloni, also, a dd test on vm shows 50MB/s15:55
*** Triade has quit IRC15:55
*** kindaopsdevy has joined #openstack15:55
*** whenry has joined #openstack15:55
notmynamentt_dev: I know swift, and swift supports that functionality, but what you are asking about has to do with keystone's functionality. I don't know how fine-grained keystone's permissions are yet15:56
*** colinmcnamara has joined #openstack15:56
uvirtbotNew bug: #1034032 in openstack-ci "make static html versions of jenkins reports for archiving" [Critical,Triaged] https://launchpad.net/bugs/103403215:56
ntt_devnotmyname, I can change my authentication system, but keystone seems to be the best. What do you recommend?15:57
gtirlonialex88: it looks good.. tring to think of something else.15:57
*** shang has quit IRC15:59
alex88gtirloni, me too, also because i've now tried to run 2 burnP6 processes, using 100% of both cpus but still 0% wait time15:59
notmynamentt_dev: I think the best auth system to use depends on the other components you are using. if you are using other openstack components (ie nova, glance, etc), use keystone. if you are using just swift and have a small number of users, use tempauth. if you are using just swift and have a large number of users, use swauth. if you are using cloudstack, use cs_auth, there's lots of options :-)15:59
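A minimal sketch of the tempauth option notmyname mentions, assuming a stock Essex-era proxy-server.conf; the account, user and key below are illustrative, not from this conversation:
    [pipeline:main]
    pipeline = healthcheck cache tempauth proxy-server
    [filter:tempauth]
    use = egg:swift#tempauth
    # one line per user: user_<account>_<user> = <key> [.admin]
    user_myaccount_admin = secretkey .admin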
*** tdowg1 has quit IRC15:59
*** kindaopsdevy_ has joined #openstack16:00
*** milner_ has joined #openstack16:01
ntt_devnotmyname, you're right. Unfortunately I use all components in openstack and I have a lot of swift users16:01
alex88gtirloni, btw, when running dd i get 80% wait, but 35-40 MB/S doesn't seems a bad value16:01
*** joebaker has joined #openstack16:02
notmynamentt_dev: then you should probably use keystone. you have to choose between a more fully featured (?) auth system like swauth and one that is better integrated across projects, like keystone16:02
*** dubsquared1 has joined #openstack16:03
*** kindaopsdevy has quit IRC16:03
*** kindaopsdevy_ is now known as kindaopsdevy16:03
koolhead17dubsquared, hola16:03
*** milner has quit IRC16:03
gtirlonialex88: i'd try iostat -xn on the host and see what's going on with the rsec/wsec times. does the host exhibit that many wait% outside the vm?16:03
gtirloni(btw, would also try iostat inside the vm)16:04
*** tomoe_ has quit IRC16:04
*** dubsquared has quit IRC16:04
ntt_devnotmyname, thanks. Maybe I will try swauth16:05
*** maoy has quit IRC16:07
*** e1mer has quit IRC16:07
*** kashyap has quit IRC16:07
UICTamaleis it necessary to use two physical networking interfaces on the host nodes in order to use flatdhcp, or is a single connection sufficient?16:08
alex88gtirloni, i don't see any rsec/wsec16:09
alex88also -n is not a flag, i've used -N16:09
alex88it can be r_await w_await?16:09
*** kpavel has quit IRC16:10
*** kpavel has joined #openstack16:10
*** natea has quit IRC16:10
*** llang629 has quit IRC16:10
*** llang629 has joined #openstack16:11
uvirtbotNew bug: #1034040 in nova "error injecting data into image: 'dict' object has no attribute 'key')" [High,In progress] https://launchpad.net/bugs/103404016:11
*** edygarcia has joined #openstack16:11
*** mnewby has joined #openstack16:11
*** mnewby has quit IRC16:12
*** mnewby has joined #openstack16:12
*** tdowg1 has joined #openstack16:13
gtirlonialex88: yea, await.. i'm on rhel6 here so iostat might be different16:13
*** s0mik has joined #openstack16:13
*** markmcclain has joined #openstack16:14
*** milner_ has quit IRC16:14
alex88gtirloni, await is 708, 24 on server16:14
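For reference, a sketch of how to watch the fields being discussed (sysstat flag support differs between versions, as noted above, so adjust as needed):
    # extended device stats, 1-second interval, 5 samples; run on the host and inside the VM
    iostat -x 1 5
    # columns to watch: await (avg ms per I/O, including queueing) and %util (device saturation)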
*** cloudvirt has joined #openstack16:15
*** lloydde has joined #openstack16:16
*** ondergetekende has joined #openstack16:16
*** whenry has quit IRC16:18
*** itz_ has quit IRC16:18
*** thovden has joined #openstack16:19
*** erikzaadi has quit IRC16:20
uvirtbotNew bug: #1034043 in nova "nova/virt/disk/api.py imports crypt, doesn't work on Windows " [Undecided,New] https://launchpad.net/bugs/103404316:20
*** jedi4ever has joined #openstack16:21
*** ntt_dev has quit IRC16:22
*** ecarlin has joined #openstack16:22
*** p3N74d4v1D has quit IRC16:22
*** kindaopsdevy has quit IRC16:22
*** kindaopsdevy has joined #openstack16:22
alex88gtirloni, it's not disk, i'm still installing a package with apt-get, and i see from iotop that it only spends a few ms writing (dpkg process) and stays idle the rest of the time16:22
alex88gtirloni, but still 50% wait16:24
*** ejat has joined #openstack16:24
*** ejat has joined #openstack16:24
alex88btw, going home now, worked enough, thank you gtirloni.. i'll check this again tomorrow16:24
*** alex88 has left #openstack16:24
*** alex88 has quit IRC16:24
*** primozf has quit IRC16:25
*** albert23 has joined #openstack16:26
*** ondergetekende has quit IRC16:28
*** supriya has joined #openstack16:28
*** rnorwood1 has quit IRC16:29
*** warik has joined #openstack16:29
*** edygarcia_ has joined #openstack16:29
*** heckj has quit IRC16:29
*** heckj has joined #openstack16:30
*** milner_ has joined #openstack16:31
*** edygarcia has quit IRC16:32
*** edygarcia_ is now known as edygarcia16:32
*** Breaking_Pitt is now known as Breaking_Out16:32
*** rnorwood has joined #openstack16:32
*** comptona has quit IRC16:33
*** ericcc has joined #openstack16:34
*** imsplitbit has quit IRC16:34
*** ecarlin has quit IRC16:34
*** ecarlin has joined #openstack16:34
*** stef_ has joined #openstack16:35
*** markmcclain has quit IRC16:36
*** dubsquared has joined #openstack16:36
*** dubsquared1 has quit IRC16:36
*** bencherian has quit IRC16:38
*** zodiak has quit IRC16:39
*** rnorwood has quit IRC16:40
*** GiBa has joined #openstack16:41
GiBahello16:41
GiBai installed openstack16:41
GiBacreated three volumes16:41
GiBaand some machines16:41
GiBathen i attached them16:41
GiBathen i halted the server16:41
GiBaand boot it up again (to check that everything is ok)16:42
GiBabut16:42
*** gyee has joined #openstack16:42
GiBanone of the volumes got attached again16:42
*** rnorwood has joined #openstack16:42
GiBathe status still shows attached but the machine doesn't see it16:42
GiBaso i try to de-attach16:42
*** halfss_ has quit IRC16:42
GiBaand get this error on the log16:42
GiBaiscsiadm: no records found!16:43
GiBahelp16:43
GiBa:)16:43
*** markmcclain has joined #openstack16:43
*** ejat has quit IRC16:43
*** dwcramer has quit IRC16:43
GiBa2012-08-07 09:31:49 TRACE nova.rpc.amqp ProcessExecutionError: Unexpected error while running command.16:43
GiBa2012-08-07 09:31:49 TRACE nova.rpc.amqp Command: sudo nova-rootwrap iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000003 -p 127.0.0.1:3260 --op update -n node.startup -v manual16:43
GiBa2012-08-07 09:31:49 TRACE nova.rpc.amqp Exit code: 25516:43
GiBa2012-08-07 09:31:49 TRACE nova.rpc.amqp Stdout: ''16:43
GiBa2012-08-07 09:31:49 TRACE nova.rpc.amqp Stderr: 'iscsiadm: no records found!\n'16:43
GiBaroot@heaven:/etc/init.d# iscsiadm -m discovery16:43
GiBaroot@heaven:/etc/init.d#16:44
*** arBmind has quit IRC16:44
*** markmcclain has quit IRC16:44
*** natea has joined #openstack16:44
*** markmcclain has joined #openstack16:45
*** koolhead17 has quit IRC16:46
*** markmcclain has quit IRC16:46
*** markmcclain has joined #openstack16:47
GiBa:(16:47
GiBano one?16:47
*** dev_sa has joined #openstack16:50
*** natea has quit IRC16:50
*** infernix has joined #openstack16:50
*** infernix has joined #openstack16:50
*** adjohn has joined #openstack16:52
*** edygarcia has quit IRC16:53
UICTamaleI'm afraid I'm not working with volumes yet16:53
UICTamalestill can't reach my VMs :(16:53
*** bencherian has joined #openstack16:54
GiBa:(16:55
*** derekh has quit IRC16:56
_sante_Hi all, I'm trying to connect a swift cluster using keystone. I'm using Ubuntu 12.04's packages. While everything seems fine on keystone's side I get this error on swift proxy-server's side: " proxy-server STDOUT: No handlers could be found for logger "keystone.middleware.auth_token". Any idea about the reason?  Thnx16:58
*** maoy has joined #openstack16:59
*** Gordonz has quit IRC17:00
*** kpavel has quit IRC17:00
*** Gordonz has joined #openstack17:00
*** MarcMorata has quit IRC17:00
*** maoy has quit IRC17:00
*** kpavel has joined #openstack17:02
*** jplewi has joined #openstack17:02
*** epim has quit IRC17:04
*** desai has quit IRC17:05
*** epim has joined #openstack17:05
*** garyk has quit IRC17:06
UICTamaleIs this important?  DataSourceEc2.py[WARNING]: 'http://169.254.169.254' failed: socket timeout [timed out]17:07
*** llang629 has quit IRC17:09
*** llang629 has joined #openstack17:09
phschwartz-rmAnyone have an idea what would cause this error when you try to access the dashboard. http://paste2.org/p/2097399 (It displays Internal Server Error)17:10
*** sacharya has quit IRC17:13
*** jedi4ever has quit IRC17:13
GiBaso many questions and no answers17:14
GiBa:(17:14
GiBais there a mailing list?17:14
*** kmwhite has joined #openstack17:14
UICTamaleI think this chatroom sees a lot of the same questions.. we need a stackexchange for openstack17:14
*** johnpur has joined #openstack17:16
*** ChanServ sets mode: +v johnpur17:16
*** edygarcia has joined #openstack17:16
*** kyriakos has quit IRC17:17
*** sstent has quit IRC17:19
*** supriya has quit IRC17:19
*** sstent has joined #openstack17:19
*** Breaking_Out has quit IRC17:20
epimGiBa: yeah, check launchpad.net17:20
GiBaim just on that17:20
*** issackel_ has joined #openstack17:20
GiBatnx epim17:20
epimUICTamale: Is that from your VM? I'm guessing it means your VM isn't able to hit the metadata server.17:21
*** bencherian has left #openstack17:21
*** lipinski1 has joined #openstack17:22
UICTamaleepim: I guessed the same - but I don't know why not.17:23
UICTamaleepim: I googled it and found this, https://answers.launchpad.net/nova/+question/18902617:23
*** phschwartz-rm has quit IRC17:23
*** phschwartz-rm has joined #openstack17:23
UICTamaletried all those suggestions to no avail17:23
lipinski1lorin1: I'm back :)..   I'm finding that the db.network_get_all_by_host method must be returning no networks.17:23
lipinski1lorin1: This is in regards to dnsmasq not starting17:23
UICTamaleIs 169.254.169.254 special?  How is that supposed to route to my controller node?17:24
phschwartz-rm169.254.169.254 is the route to the metadata server.17:24
epimI don't use the metadata stuff, but i'll guess it's one of a couple things. 1: The firewall rules aren't being configured correctly on your hypervisor. 2: Are you using Openvswitch?17:24
*** dendrobates is now known as dendro-afk17:24
epimUICTamale: there are some special DNAT and SNAT firewall rules on the hypervisor, they're (supposed) to trap all outbound traffic for 169.254.169.254 and reroute them to localhost17:25
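A sketch of the kind of rule epim is describing; nova-network normally installs the equivalent itself (in its own nat chains), so this is only for illustration, and 10.0.0.1 stands in for the metadata host's IP:
    iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 \
        -j DNAT --to-destination 10.0.0.1:8775
    # check what is actually installed:
    iptables -t nat -S | grep 169.254.169.254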
UICTamaleepim - I think it's related to my nova.conf - my first VM didn't have this problem17:25
UICTamaleand it was created on the same host as the controller - this instance is being spun up on a separate nova-compute node17:26
lorin1lipinski1: Hmm. Did you try creating a network first with nova-manage?17:26
epimDid you change your firewall driver?17:26
UICTamalenot that I know of17:27
lipinski1yes - I have a network defined (10.10.1.0/24).  The IP 10.10.1.1 is assigned on the bridge.  However, nova-manage fixed list does not list any hosts associated with any of the IPs.17:27
*** KarinLevenstein has quit IRC17:27
*** KarinLevenstein has joined #openstack17:27
*** darraghb has quit IRC17:28
epimNot sure past that, sorry :\ We're using Openvswitch and we've given up on getting metadata working for now17:29
*** cp16net is now known as cp16net|away17:29
UICTamaleepim: I'm going to try this fellow's suggestion - http://forums.openstack.org/viewtopic.php?f=9&t=576 - stopping nova-network on my second node17:30
*** natea has joined #openstack17:31
*** anniec has joined #openstack17:31
epimOh, are you using bridged networking, nat, or bridged with vlan tagging?17:31
UICTamaleflat dhcp17:32
UICTamalebridged17:32
UICTamaleI have two physical interfaces on each node17:32
UICTamaletrying to follow:  http://unchainyourbrain.com/openstack/13-networking-in-nova17:32
*** ecarlin has quit IRC17:33
*** natea_ has joined #openstack17:33
*** dev_sa has quit IRC17:34
DiopterI'm curious why you're not following the actual official docs/guides, since those images and text are seemingly taken verbatim from them17:34
DiopterBut is less likely to be as up-to-date17:34
epimhrmm, i'm going to guess that maybe the 169.254 traffic is getting routed through a physical interface rather than being terminated locally.17:34
*** natea has quit IRC17:35
*** natea_ is now known as natea17:35
UICTamaleDiopter: The official docs/guides offer much in the way of overviews, but very little in specific implementation17:35
UICTamaleFrom what I've seen so far, anyway17:35
*** ecarlin has joined #openstack17:36
GiBaim stuck on this17:36
GiBaits useless17:36
lipinski1lorin1: nova-network just started starting dnsmasq.  Very strange.  10.10.1.1/24 was on br1 interface.  I added 10.10.1.1/8, and restarted nova-network, and it started working (starting dnsmasq).  I have removed 10.10.1.1/8 and restarted again, and it is continuing to work.  That is crazy finicky.17:36
DiopterPerhaps an IP isn't getting assigned to the VM bridge, which would prevent the traffic from hitting the routing engine in the host, which would therefore not enter the PREROUTING chain in iptables NAT table and not get DNAT'd17:36
GiBait's the third time i install everything and it always fails at the same point17:36
Diopter^ directed at epim's comment17:36
uvirtbotDiopter: Error: "directed" is not a valid command.17:36
*** tuf8 has joined #openstack17:36
epimDiopter: that makes sense17:37
UICTamaleDiopter: That makes sense - how do I test the theory?17:37
UICTamaleI'll pastebin the instance log17:37
GiBait keeps failing with 'iscsiadm: no records found17:37
DiopterUICTamale: Well, first steps, check "ip a" and see what's on the bridge. Maybe "brctl show" to make sure the right ifs are in the bridge17:37
UICTamalehttp://pastebin.com/0tjgrwW517:38
UICTamaleon the node, not the VM?17:38
*** sacharya has joined #openstack17:38
DiopterRight.17:38
*** k0rupted has joined #openstack17:38
UICTamaleI see 169.254.169.254 on the loopback interface17:39
epimUICTamale: can you also pastebin your iptables rules?17:39
*** natea has quit IRC17:39
DiopterThe expected setup is that 169.254.169.254 is on the loopback if (lo), the VM vif is in the VM bridge (usually vnet# in br###), and the bridge has an IP which is in the VM network (Nova fixed), which dnsmasq will be handing out IPs in (for FlatDHCP)17:40
UICTamaleepim: On the host node, it's whatever the defaults are for ubuntu 12.04 - On the VM, it's whatever the defaults are for ubuntu 11.10 - I haven't touched them.17:40
Diopternova-network orchestrates those pieces17:40
DiopterUICTamale: What's on the bridge?17:40
*** msinhore has joined #openstack17:40
UICTamaleI'll pastebin it17:40
*** alanmac has quit IRC17:40
DiopterCool. "ip a; ip r; brctl show; iptables-save" would be awesome, if you want to be thorough17:41
*** dolphm has joined #openstack17:41
UICTamalesure17:41
*** jdurgin has joined #openstack17:42
*** samkottler is now known as samkottler|food17:42
UICTamalePMd17:42
Diopterok17:43
*** natea has joined #openstack17:43
DiopterActually looks just fine. Hrm. Let me spin up a controller real quick and double check a thing or two17:44
*** bsza has left #openstack17:45
DiopterAlso, for reference, you can "virsh net-destroy default; virsh net-autostart default --disable", to kill all the virbr0 and other goofy rules in there17:45
DiopterThe 'default' libvirt network has nothing to do with Nova, and in theory shouldn't conflict, but...17:45
*** ZtF has quit IRC17:45
UICTamalegotcha, I was wondering about those.17:46
DiopterMight be worth destroying and disabling it and trying again17:46
UICTamalesure17:46
UICTamaletrying now17:46
*** KarinLevenstein has quit IRC17:46
*** KarinLevenstein has joined #openstack17:47
*** dwcramer has joined #openstack17:47
UICTamaleno luck17:47
UICTamalewhat about the nova.conf option of '--nova_network=xxx'17:47
lipinski1ok - back to my old problem.  In network.log when creating an instance:17:47
lipinski1Timed out waiting for RPC response: None17:47
UICTamaleis that supposed to be there?17:47
UICTamaleaha - this is probably telling - I can ping the new VM via its private IP, but I can't ssh to it17:49
DiopterUICTamale: "supposed to be" is strong language. It's likely optional with a default. Also... are you running Essex here? Because your nova.conf flag style looks like Diablo17:49
UICTamalebut I know my security groups are valid because I can ping AND ssh into my original VM17:49
DiopterAre you sure the new VM is using the same security group?17:49
GiBaa quick one17:50
DiopterAlso, try using VNC to get on it17:50
UICTamaleI haven't made any others, and I only made changes to the default one17:50
GiBaflat network17:50
GiBa192.168.4.*17:50
UICTamaleEssex17:50
GiBabut eth1 on both machines 192.168.3.1 and 192.168.3.217:50
UICTamalethe essex docs / guides have the same style - what should it be?  no -- ?17:50
GiBa.3.x network17:50
GiBais that ok?17:50
DiopterUICTamale: Do you have "fixed_network=x.x.x.x/yy" in your nova.conf?17:50
DiopterGiBa: Depends. Is your network /23?17:51
UICTamalegrep fixed /etc/nova/nova.conf17:51
UICTamale--fixed_range=192.168.22.32/2717:51
GiBano /2417:51
DiopterEr, fixed_range, sorry.17:51
DiopterOk, so you do have it in there.17:51
GiBaim getting stressed17:51
GiBa:(17:51
*** iNdefiNite has quit IRC17:51
lorin1lipinski1: Don't know what to tell you...17:52
DiopterGiBa: You might have conflicting information in nova's database (nova-manage network list, or similar) vs. nova.conf, or host OS network config.17:52
*** matwood has quit IRC17:54
*** matwood has joined #openstack17:55
GiBathe original problem is i can't detach because 'iscsiadm: no records found!'17:55
GiBanow i broke everything xD17:55
annegentleUICTamale: "The default nova.conf format for Essex packages varies by distribution. Fedora packages use the new INI style format, and Ubuntu 12.04 packages use the old flag format. " -- http://docs.openstack.org/essex/openstack-compute/install/apt/content/compute-minimum-configuration-settings.html17:55
*** comptona has joined #openstack17:55
*** dendro-afk is now known as dendrobates17:55
GiBaim thinking about that cracked version of vmware...17:56
GiBarofl17:56
*** livemoon has joined #openstack17:56
UICTamaleannegentle: Thanks17:56
*** comptona1 has joined #openstack17:57
*** GiBa has left #openstack17:58
*** rnorwood has quit IRC17:58
*** maoy has joined #openstack17:59
lipinski1how do you get an instance out of a stuck BUILD state?17:59
*** sandywalsh has quit IRC17:59
*** natea has quit IRC17:59
*** ev0ldave has joined #openstack17:59
*** desai has joined #openstack17:59
*** comptona has quit IRC18:00
*** natea has joined #openstack18:00
annegentlelaunch another, crumple it up, throw it away, (delete it from the nova database)?18:00
*** maoy has quit IRC18:00
annegentlesorry, meant to say lipinski1 ^^18:00
lipinski1annegentle: every instance I create gets stuck in BUILD.18:01
*** comptona1 is now known as comptona18:01
annegentlelipinski1: what's the image?18:01
lipinski1an Ubuntu iso18:01
annegentlecan you launch it through other means?18:01
lipinski1It's stuck in the networking task_state18:01
lipinski1I was able to launch it yesterday - before running into all these dnsmasq/nova-network issues18:02
lipinski1There's still something wrong with the networking.  nova-network is now launching dnsmasq, but stuck at this point.18:02
*** msinhore has quit IRC18:02
*** livemoon has left #openstack18:03
*** rnorwood has joined #openstack18:03
*** h0cin has quit IRC18:03
annegentlelipinski1: sometimes two dnsmasq processes are running, have you done killall dnsmasq?18:04
*** natea has quit IRC18:04
lipinski1annegentle: well, there are two dnsmasq running - the parent and child.  There is no KVM dnsmasq running.18:05
annegentlelipinski1: and then this is the networking troubleshooting I know of http://docs.openstack.org/trunk/openstack-compute/admin/content/network-troubleshooting.html18:05
annegentlelipinski1: and then also check the dnsmasq configuration http://docs.openstack.org/trunk/openstack-compute/admin/content/dnsmasq.html18:06
Diopterlipinski1: Sounds like you have a default dnsmasq daemon running. You might want to disable that one.18:06
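A sketch of the cleanup Diopter is suggesting, assuming Ubuntu 12.04 (service and package names may differ on other distros):
    pgrep -fl dnsmasq           # see which dnsmasq processes exist and what started them
    service dnsmasq stop        # stop the distro's standalone dnsmasq, if it is installed
    update-rc.d dnsmasq disable # keep it from coming back at boot
    killall dnsmasq             # clear any leftovers
    restart nova-network        # let nova-network respawn its own dnsmasq parent/child pair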
*** lcheng has joined #openstack18:07
*** UICTamale has quit IRC18:07
*** BlackMaria has quit IRC18:07
*** UICTamale has joined #openstack18:07
*** BlackMaria has joined #openstack18:08
*** Ryan_Lane has quit IRC18:08
lipinski1dnsmasq is correct.  the hosts file it is pointed to (by nova-network) is empty.18:08
lipinski1nova-network is not adding instances to the file when they're instantiated.18:09
lipinski1Is there a way to test qpid connectivity?18:09
lipinski1Seems like a general communication problem among my nodes.18:09
*** colinmcnamara has quit IRC18:09
*** iNdefiNite has joined #openstack18:10
ev0ldavelipinski1:  sorry, maybe i missed this, but can you paste in nova-compute.log18:10
*** sacharya has quit IRC18:12
lipinski1ev0ldave: let me clear the log and start a new one.18:12
ev0ldavelipinski1:  nova.conf as well18:13
lipinski1ev0ldave: http://pastie.org/440722418:15
lipinski1I pasted the nova.conf from the controller, compute.log from the compute node, and network.log from the controller node (nova-network).18:16
lipinski1RPC errors again.18:16
*** jog0 has joined #openstack18:18
*** johnpur has quit IRC18:20
*** nati_ueno has joined #openstack18:20
*** imsplitbit has joined #openstack18:20
*** sacharya has joined #openstack18:20
ev0ldavethanks18:20
*** MarcMorata has joined #openstack18:20
*** johnpur has joined #openstack18:21
*** ChanServ sets mode: +v johnpur18:21
*** msinhore has joined #openstack18:21
*** primeministerp has quit IRC18:22
*** primeministerp has joined #openstack18:22
*** qazwsx has quit IRC18:22
*** dev_sa has joined #openstack18:22
acidprimehi all18:24
*** jog0 has left #openstack18:25
acidprimeis there a good way to force a refresh in horizon's /syspanel/instances/18:25
*** alanmac has joined #openstack18:25
acidprimenova-manage floating list18:25
acidprimeshows floating ips alloc'ed18:25
acidprimebut they seem to slowly filter in to the ui18:26
*** BlackMaria1 has joined #openstack18:26
acidprimeif I terminate they all refresh18:26
acidprimetried "flush_all" with memcached18:26
acidprimerestarting apache etc. but no go18:27
acidprimeanyone have any tricks?18:27
*** anniec_ has joined #openstack18:27
*** anniec has quit IRC18:27
*** anniec_ is now known as anniec18:27
*** BlackMaria has quit IRC18:28
*** rnorwood has quit IRC18:28
*** Madkiss has quit IRC18:30
*** Madkiss has joined #openstack18:30
*** Madkiss has quit IRC18:30
*** Madkiss has joined #openstack18:30
*** rnorwood has joined #openstack18:31
*** fikus-kukis^TP has joined #openstack18:31
*** lorin1 has quit IRC18:33
*** jmh_ has joined #openstack18:34
*** samkottler|food is now known as samkottler18:34
*** rnorwood1 has joined #openstack18:34
*** rnorwood has quit IRC18:35
*** sandywalsh has joined #openstack18:36
*** ctracey has joined #openstack18:38
*** dsalinas has joined #openstack18:39
*** imsplitbit has quit IRC18:40
*** h0cin has joined #openstack18:41
*** h0cin has joined #openstack18:41
ev0ldavelipinski1:  did you make changes to your nova.conf?  not sure how you got a subinterface to be bridged18:41
*** imsplitbit has joined #openstack18:41
*** cp16net|away is now known as cp16net18:42
*** cp16net is now known as cp16net|away18:42
*** cp16net|away is now known as cp16net18:42
lipinski1we defined the bridge manually.18:43
ijwCan someone tell me what (in Essex, with FlatDHCP) usually provides the 169.254.169.254 interface?  I've done something to break it, and I'm not quite sure what would normally be in place to make it happen.18:43
lipinski1create the vlan interface.  create the bridge, add the vlan interface to the bridge.18:43
lipinski1ev0ldave: It's a VLAN interface, not a sub-interface (alias)18:44
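A sketch of that manual setup, using the 10.10.1.0/24 network mentioned earlier; the VLAN id and interface names are illustrative:
    vconfig add eth1 100        # or: ip link add link eth1 name eth1.100 type vlan id 100
    brctl addbr br100
    brctl addif br100 eth1.100
    ip link set eth1.100 up
    ip link set br100 up
    ip addr add 10.10.1.1/24 dev br100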
Diopterijw: Probably nova-api-metadata; the IP (169.254.169.254/32) goes on the loopback, and is accompanied by a couple iptables rules18:44
Diopteran INPUT (filter) ACCEPT, a PREROUTING (nat) DNAT, etc18:44
*** kindaopsdevy has quit IRC18:45
Diopterijw: Though it's probably nova-network itself that provides the iptables trappings18:45
ijwDiopter: hm, I expected to see something in the iptables rules but couldn't spot it.  Also, it's served from the compute node or the API node?18:45
*** maoy has joined #openstack18:45
*** lorin1 has joined #openstack18:46
ijwWhatever has the relevant nova-network, I'm guessing...18:46
Diopterijw: Depends on your networking approach. FlatDHCP with multi_host=true will run nova-api-metadata and nova-network on each compute node.18:46
ijwIt is indeed multi_host18:46
*** kindaopsdevy has joined #openstack18:46
*** sandywalsh_ has joined #openstack18:47
ijwOK then, a slightly different question: what process normally provides the HTTP server?18:47
*** sandywalsh has quit IRC18:47
*** markmcclain1 has joined #openstack18:48
ijwAh, that'd be what you said a moment ago.18:48
* Diopter nods18:49
*** markmcclain1 has quit IRC18:49
*** markmcclain1 has joined #openstack18:49
DiopterGotta do moar works nao. Good luck!18:49
*** garyk has joined #openstack18:49
UICTamaleHey all, Diopter gave me a big hand - I got to the point where the new VMs are now complaining with "connection refused" instead of "socket timeout":18:50
UICTamaleDataSourceEc2.py[WARNING]: 'http://169.254.169.254' failed: url error [[Errno 111] Connection refused]18:50
*** markmcclain has quit IRC18:50
*** kindaopsdevy has quit IRC18:51
*** SplasPood has quit IRC18:52
*** markmcclain1 has quit IRC18:52
*** markmcclain has joined #openstack18:53
ijwUICTamale: since you're looking at *exactly* the thing I'm having a problem with, can you please tell me what's listening to 169.254.169.254:80 on your machine?18:54
*** thovden has quit IRC18:58
UICTamalesure18:59
UICTamaleapache is19:00
UICTamaleon my controller19:00
UICTamalenot on my compute node though19:00
*** warik has left #openstack19:02
*** reed has quit IRC19:02
*** reed has joined #openstack19:03
*** Ryan_Lane has joined #openstack19:03
ijwYes, which is also what's happening with me19:03
DiopterSo, ss -lntp shows nothing for port 80 on your compute nodes?19:04
ijwno19:04
ijwI assume nova-api-metadata should be installed and running there and I suspect it never got installed.19:05
ijwI presume it's just a nova-api without the actual api?19:05
uvirtbotNew bug: #1034117 in nova "xenapi: race condition between creating instance and fetching console URL working" [Undecided,New] https://launchpad.net/bugs/103411719:06
DiopterTechnically, nova-api-metadata runs on 877519:07
UICTamaleI think I figured it out19:07
DiopterWhich you need a DNAT rule for19:07
Diopter169.254.169.254:80 -> x.x.x.x:877519:07
UICTamaleBoth my controller+compute node AND my standalone compute node have the same private IP for the br100 connection19:07
DiopterWhich is often bound to 127.0.0.1 or 0.0.0.0 (depending on nova.conf flags)19:07
UICTamalethus, when the VM on my stand-alone compute node hits the routing table looking at the bridge address, it doesn't find the controller node19:07
ijwDiopter: essex or Folsom?  Cos at least with Ubuntu / Essex it appears that nova-api and nova-api-metadata conflict.19:08
*** ecarlin_ has joined #openstack19:09
*** dubsquared has quit IRC19:09
*** dubsquared has joined #openstack19:10
*** ecarlin has quit IRC19:10
*** ecarlin_ is now known as ecarlin19:10
UICTamalewhere does nova_network get the IP for the bridge from?  flat_network_dhcp_start ?19:10
Diopterijw: Essex, and when I'm running an "all-in-one", it's got: nova-api-ec2, nova-api-os-compute, nova-api-os-volume, nova-api-metadata running together.19:10
*** johnpostlethwait has joined #openstack19:10
DiopterUICTamale: Yeah. That and the nova network in the database.19:10
UICTamaleshould I expect the two nova-network instances to cooperate?19:11
DiopterUm... nova-network should only be running on compute nodes when you're running FlatDHCP/multi_host and don't have compute on your controller19:11
DiopterDon't need it (or likely any bridges) on nodes that are purely controllers19:11
UICTamaleI do have compute on my controller19:11
DiopterAha19:11
DiopterSo you have an all-in-one + extra computes19:12
UICTamaleyes19:12
DiopterThey'll cooperate, then, if their nova.conf networking sections match.19:12
UICTamaleand the VMs on the 'all-in-one' is good19:12
DiopterCool.19:12
UICTamalebut they appear to have given the same IP to the br100 interface on the two machines19:12
UICTamalewhich can't be good19:12
ijwDiopter: is your set up single or multinode?19:13
Diopterijw: Got both. I kicked a single a little bit ago to check something. I actually work on deployment tools for a living19:13
DiopterSo, in short, "yes"19:13
UICTamaleok - this is where it will all hopefully make sense for me - how does the nova-network process on extra nodes "talk to" the nova-network process on the controller?19:14
UICTamaleis that what the nova_network=xxx nova.conf setting comes into play?19:14
lipinski1ok - narrowed my RPC problem down.  It only happens when the VM is started on a compute node that is not the controller node.  I can start VMs on the controller node.19:15
*** japage has quit IRC19:15
DiopterUICTamale: nova-network actually talks to various APIs and does some RPC, regardless of which node it's on, so it coordinates things through the database that way19:16
UICTamaleDiopter: Ok, thanks19:16
DiopterUICTamale: There's network tables that store information such as which IPs are reserved, leased, etc19:16
*** nati_uen_ has joined #openstack19:16
*** nati_ueno has quit IRC19:16
UICTamaleI saw that19:16
UICTamaledigging through mysql19:17
ijwRight, well that helped, with some confusion.  There's the single nova-api package, it was installed, it's listening on 8775, so the only thing that's missing is the rewrite rule (which seems to have gone awol)19:17
UICTamalewhatever is supposed to route stuff from the VMs on my 'extra nodes' to the controller is broken19:17
Diopterijw: nova-network should add it for you, I believe. Try restarting it?19:17
ijwok19:17
*** dubsquared1 has joined #openstack19:18
*** dubsquared has quit IRC19:18
*** joebaker has quit IRC19:20
*** manu-db has left #openstack19:21
*** ryanpetrello has joined #openstack19:21
*** torgomatic has quit IRC19:23
*** torgomatic has joined #openstack19:24
*** natea has joined #openstack19:24
*** kmwhite has quit IRC19:25
*** desai has quit IRC19:27
*** dwcramer has quit IRC19:28
*** pixelbeat has quit IRC19:28
*** desai has joined #openstack19:28
Math___who here has the biggest usable disk space in swift ? I have 120TB raw and 40TB usable and I am wondering at which point we'll notice performance degradation19:29
*** ecarlin has quit IRC19:31
*** renier_ has joined #openstack19:32
*** renier has quit IRC19:33
*** dubsquared1 has quit IRC19:33
*** rnorwood1 has quit IRC19:33
*** dubsquared has joined #openstack19:34
*** AlanClark has joined #openstack19:34
*** FlorianOtel has quit IRC19:35
*** ecarlin has joined #openstack19:35
*** dwcramer has joined #openstack19:35
*** rnorwood has joined #openstack19:37
*** renier_ has quit IRC19:37
*** renier has joined #openstack19:38
notmynameMath___: swift is designed to be horizontally scalable (ie no degradation as you add nodes. in fact performance may get better for some use cases)19:38
notmynameMath___: are you seeing some poor performance? are you looking at growing your cluster?19:38
*** ecarlin has quit IRC19:39
*** rnorwood1 has joined #openstack19:40
uvirtbotNew bug: #1034129 in tempest "Whitebox tests are failing in jenkins" [Undecided,New] https://launchpad.net/bugs/103412919:41
*** rnorwood has quit IRC19:41
*** steveb_ has joined #openstack19:42
*** cloudvirt has quit IRC19:42
UICTamaleok, nova-network keeps clobbering my bridge IPs when it makes new VMs19:43
*** sacharya has quit IRC19:43
*** heckj has quit IRC19:44
*** bbcmicrocomputer has quit IRC19:44
DiopterUICTamale: It should be setting it for you. If you're setting it yourself, something is wrong somewhere.19:44
DiopterUICTamale: I only suggested you do so as a testing measure19:44
*** natea has quit IRC19:44
UICTamaleDiopter: I know, but when I tried letting it set itself, both instances grabbed the same IP19:45
UICTamaleso I wanted to try setting it manually one more time19:45
*** vila has quit IRC19:46
ev0ldaveyou can set the flag in nova db fixed ips as reserved that your bridge interfaces are using19:46
UICTamalealso, when I don't define an IP in /etc/network/interfaces, it doesn't come up until I run 'ip l s dev eth1 up'19:46
UICTamaleev0ldave: Yup, that's what I did19:46
*** cloudvirt has joined #openstack19:46
*** tongli has quit IRC19:47
*** ecarlin has joined #openstack19:47
ijwDiopter: cheers for the pointers.  It's still not working but I have a good handle on what to look at now, anyway19:48
Diopterijw: Np, good luck19:48
UICTamaleOk, I tried reserving the IPs again19:50
uvirtbotNew bug: #1034130 in openstack-ci "find out why git operations from oneiric hosts are slow" [Critical,Triaged] https://launchpad.net/bugs/103413019:51
*** anotherjesse has joined #openstack19:51
UICTamaleCan anyone think of a reason nova-network would be able to create the br100 interface and link it to eth1, but NOT be able to bring it up?19:51
*** anotherjesse has left #openstack19:51
ijwUICTamale: evil pixies19:52
*** rkukura has quit IRC19:52
*** clopez has quit IRC19:52
UICTamaleI'll call the exterminator19:52
ijwIt's for the best.  They get everywhere.19:52
DiopterUICTamale: You probably want to set eth1 up yourself, but not give it an IP, in /etc/network/interfaces.19:53
DiopterUICTamale: http://wiki.debian.org/NetworkConfiguration#Bringing_up_an_interface_without_an_IP_address is one approach, for instance.19:54
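The approach on that wiki page boils down to a stanza like the following in /etc/network/interfaces; eth1 is assumed to be the flat interface here:
    auto eth1
    iface eth1 inet manual
        up   ip link set dev $IFACE up
        down ip link set dev $IFACE down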
UICTamaleNice19:54
trygvisUICTamale: I don't know if it is supposed to19:54
trygvisyeah, what Diopter said19:54
UICTamaleI'll try that soon - for now I'm trying to explicitly set everything I can to rule things out19:54
UICTamaleFor instance, now everything has a unique IP19:54
*** sandywalsh_ has quit IRC19:55
UICTamalebut I'm still getting:  cloud-init-nonet waiting 120 seconds for a network device.19:55
UICTamalecloud-init-nonet gave up waiting for a network device.19:55
Math___notmyname:  I am looking at a 4.4PB (usable)/ 13.2 PB (raw) setup, across 3k nodes19:56
Math___notmyname: yes, it will be a mess to manage19:56
Math___I have OK performance right now, but I believe it can be better19:57
*** colinmcnamara has joined #openstack19:57
lipinski1can I force nova to create an instance on a particular host?19:57
*** dendrobates is now known as dendro-afk19:59
*** ecarlin has quit IRC20:02
*** Domin has joined #openstack20:03
UICTamaleanyone have any more ideas?20:05
*** devcamca- is now known as devcamcar20:06
*** ecarlin has joined #openstack20:06
*** kindaopsdevy has joined #openstack20:06
*** kindaopsdevy has quit IRC20:06
*** devcamcar is now known as devcamca-20:06
*** heckj has joined #openstack20:06
*** kindaopsdevy has joined #openstack20:06
*** kindaopsdevy_ has joined #openstack20:06
*** devicenull has joined #openstack20:07
*** devcamca- is now known as devcamcar20:07
lipinski1Why would DHCP offers be seen on the host, but not on the VM?  I see the nova-network host offering a DHCP IP, but the VM OS is not receiving it.  It's a simple Ubuntu image.20:08
*** joshuamckenty has joined #openstack20:08
devicenullI'm having trouble using the /servers/ips API call.. I'm seeing this error in the log: http://paste.openstack.org/show/20023/20:08
devicenulldoes anyone know what that actually means?20:08
*** nmistry has joined #openstack20:10
*** curahack1 has joined #openstack20:13
*** pvankouteren has joined #openstack20:14
*** reed has quit IRC20:14
*** ecarlin has quit IRC20:16
*** ecarlin has joined #openstack20:16
UICTamaleI made a forum post with all the details / settings in one place.. that'll probably help20:16
UICTamalehttp://forums.openstack.org/viewtopic.php?f=9&t=1456#p389520:16
*** ecarlin_ has joined #openstack20:17
*** ecarlin__ has joined #openstack20:18
*** ecarlin has quit IRC20:20
*** ecarlin__ is now known as ecarlin20:20
uvirtbotNew bug: #1034143 in openstack-manuals "Task: remove nova-manage commands that are no longer available in Folsom" [High,Triaged] https://launchpad.net/bugs/103414320:21
*** ecarlin_ has quit IRC20:22
*** wiliam has quit IRC20:22
*** melmoth has quit IRC20:24
*** renier has quit IRC20:25
*** renier has joined #openstack20:25
uvirtbotNew bug: #1034144 in openstack-manuals "Task: remove ambiguity about memcache and swift configuration" [Medium,Confirmed] https://launchpad.net/bugs/103414420:25
*** sacharya has joined #openstack20:27
*** msinhore has quit IRC20:27
*** issackel_ has quit IRC20:28
*** devicenull has left #openstack20:28
*** Glace has quit IRC20:31
*** Glace has joined #openstack20:31
*** tuf8 has quit IRC20:33
anticwwrt to cinder, what's the expected work flow if i want to take a volume/snapshot and move it into something visible to glance?20:34
anticwvs just being able to create new vm's from an existing snapshot (ie .create_volume_from_snapshot)20:35
*** msinhore has joined #openstack20:41
*** dwcramer has quit IRC20:42
*** ryanpetrello has left #openstack20:42
*** ewindisch has quit IRC20:44
*** imsplitbit has quit IRC20:44
*** ewindisch has joined #openstack20:44
*** silkysun has quit IRC20:45
*** jaypipes has quit IRC20:46
*** mattstep has quit IRC20:47
*** issackel_ has joined #openstack20:48
*** KarinLevenstein has quit IRC20:48
*** acadiel^ has joined #openstack20:48
*** acadiel has quit IRC20:48
*** acadiel^ is now known as acadiel20:48
*** colinmcnamara has quit IRC20:49
*** rmartinelli has quit IRC20:49
*** bsza has joined #openstack20:50
*** lazyshot has quit IRC20:50
*** colinmcnamara has joined #openstack20:51
*** lazyshot has joined #openstack20:51
*** aliguori has quit IRC20:53
*** mrjazzcat has quit IRC20:53
*** bsza has quit IRC20:54
*** dwcramer has joined #openstack20:55
*** arBmind has joined #openstack20:56
*** ecarlin has quit IRC20:57
*** markvoelker has quit IRC20:57
*** phschwartz-rm has quit IRC21:00
*** vmlinuz has quit IRC21:00
*** phschwartz-rm has joined #openstack21:00
UICTamalewell, I went back to square one and tried everything WITHOUT running nova-network on the other nodes21:01
UICTamaleso far, I haven't run into a single problem this time.21:01
*** cloudvirt has quit IRC21:01
*** tan has quit IRC21:01
DiopterUICTamale: Sounds like you don't have multi_host=true21:01
UICTamaleI had to turn it back to false for this to work21:02
UICTamalebut I checked that it was set to true21:02
UICTamale(earlier)21:02
UICTamalewhat do I gain by running multiple nova-networks?21:02
*** nelson1234 has joined #openstack21:03
*** edygarcia has quit IRC21:03
UICTamaleI mean, I'm just as dependent on the controller with multiple nova-networks as I would be on a single nova-network instance, right?21:03
*** ecarlin has joined #openstack21:04
*** maoy has quit IRC21:04
*** joshuamckenty has quit IRC21:04
*** ecarlin has quit IRC21:05
*** tan has joined #openstack21:05
DiopterUICTamale: Dependent for things like API/RPC, but that's only if you want to make changes. Otherwise, VMs will independently route and have working networking in multi_host mode, with or without the controller21:05
DiopterUICTamale: Plus, without multi_host, the controller is a huge bottleneck21:06
*** ecarlin has joined #openstack21:06
UICTamaleDiopter: Well crap.  I better figure it out then21:06
DiopterMmhmm.21:06
*** mrjazzcat has joined #openstack21:06
UICTamaleI was just so happy to finally make a second VM that also came up successfully :|21:06
Diopter!multi_host + FlatDHCP = controller is the gateway for all compute nodes, which is probably why your bridges weren't getting IP'd21:06
openstackDiopter: Error: "multi_host" is not a valid command.21:06
DiopterBecause they *just* bridge in that setup21:07
DiopterExcept on the controller21:07
DiopterPlus, you don't run nova-network on them21:07
*** BlackMaria1 has quit IRC21:07
DiopterSo, make sure multi_host is enabled in *all* nova.conf's where nova-compute is being run, and make sure nova-network is also running on those nodes.21:07
DiopterThen you should get your bridges auto-IP'd with independent routing from each other, as expected.21:07
UICTamaleOk, I'll give it one more shot.21:08
UICTamaleCan you clear something up>?21:08
UICTamalehttp://unchainyourbrain.com/openstack/13-networking-in-nova21:08
DiopterUICTamale: http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-ha-networking-options.html21:08
Diopter^ Option 121:08
uvirtbotDiopter: Error: "Option" is not a valid command.21:08
*** joshuamckenty has joined #openstack21:08
UICTamaleha, same diagram21:08
Diopteruvirtbot, openstack: Quit yer whining, bots!21:08
uvirtbotDiopter: Error: "openstack:" is not a valid command.21:08
UICTamalelol21:08
*** maurosr has quit IRC21:08
UICTamalenotice how he doesn't give his eth1s the same network as his dhcp21:09
UICTamaleyou suggested I needed to21:09
*** capricorn_1 has quit IRC21:10
DiopterUICTamale: Well, the FlatDHCP + multi_host with 2 NICs (public/private) networking setup is supposed to work like this:21:10
*** RicardoSSP has joined #openstack21:12
*** RicardoSSP has joined #openstack21:12
Diopterpublic_interface=eth0, flat_interface=eth1, flat_network_bridge=br100, say21:12
*** markmcclain has quit IRC21:12
*** colinmcnamara has quit IRC21:12
DiopterUICTamale: Similar to this: http://docs.openstack.org/trunk/openstack-compute/admin/content/libvirt-flat-dhcp-networking.html21:12
UICTamaleYou know what I need?  Example /etc/network/interface and /etc/nova/nova.conf files for that option 1 diagram21:13
Diopter(check the config in the lower section)21:13
contextew. makes more sense to run nova-network everywhere21:13
Dioptercontext: Much more.21:13
DiopterUICTamale: So, there's nova.conf examples there, and I'm suggesting you don't need anything in /etc/network/interfaces for the flat interface or bridge you choose.21:14
DiopterUICTamale: The idea here is... when nova-network spins up with this sort of nova.conf (and multi_host enbled), it's going to try to bring up the flat interface, create the flat bridge, add the interface in, IP the bridge out of the fixed network, then setup dnsmasq to DHCP off the bridge for VMs21:15
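Pulling the pieces Diopter lists into one place, a minimal flag-style fragment for this mode might look like the following; the values are illustrative and mirror the examples used in this conversation:
    --network_manager=nova.network.manager.FlatDHCPManager
    --multi_host=True
    --public_interface=eth0
    --flat_interface=eth1
    --flat_network_bridge=br100
    --fixed_range=100.100.100.0/24
    --flat_network_dhcp_start=100.100.100.10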
contextim getting nova-volume running tonight, then to get a vm running21:15
*** nati_uen_ has quit IRC21:15
DiopterUICTamale: The theory being, of course, that the flat interface can have traffic bridged through it, either just to other hosts or to some routed gateway (which requires a few more steps)21:15
UICTamaleDiopter: So, the problem had to be internal to nova-network in my case21:15
DiopterUICTamale: I think you pretty much had to have multi_host not enabled right or not enabled everywhere, to explain the behavior you saw21:16
*** colinmcnamara has joined #openstack21:16
DiopterIt brought up your bridge, but didn't IP it, and only your all-in-one node worked21:16
DiopterSounds just like regular FlatDHCP21:16
UICTamaleOk, well if I were to start from scratch one more time, when should I do the nova-manage network add command?21:16
UICTamalebefore or after I spin up the nova-network on the second node?21:17
DiopterUm, before you spin it up anywhere, probably!21:17
Diopterlol21:17
*** nati_uen_ has joined #openstack21:17
UICTamalehaha21:17
UICTamaleok21:17
DiopterWe use chef and I'm uncertain offhand if our cookbooks do it before or after, but, I'm 99% sure they restart nova-network after they add a network.21:18
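A sketch of the command in question, using the same illustrative values; flag names vary a little between releases, so check nova-manage network create --help before copying this:
    nova-manage network create --label=private \
        --fixed_range_v4=100.100.100.0/24 --num_networks=1 --network_size=254 \
        --bridge=br100 --bridge_interface=eth1 --multi_host=T
    restart nova-network        # on every node running it, once the network exists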
*** jathanism is now known as zz_jathanism21:18
DiopterMaybe not in Essex now... that behavior's changed a bit over the past year21:18
UICTamaleDiopter: Yeah, it would have to be restarted to create the proper bridge21:18
UICTamaleI'll use whether the bridge is created as my litmus test21:18
UICTamaleYou say I shouldn't have to define the eth1 OR bridge in interfaces?21:18
*** MarcMorata has quit IRC21:19
DiopterRight. Just spin up a VM to test21:19
DiopterBecause that's the code path that does the bridgy goodness21:19
UICTamaleaight21:20
UICTamaleYou've given me the strength to carry on when I thought I was a goner..21:20
UICTamale:)21:20
Diopterhehe21:20
UICTamaleYour patience is impressive - I probably would've given up on me by now.21:20
Diopterj00 can do eet!21:20
DiopterEh, I'm writing networking udebs for the debian-installer while helping you. It's kind of the same work, just in two windows instead of one :P21:21
UICTamalehaha21:21
UICTamaleGlad I'm asking in here today then.21:21
*** japage has joined #openstack21:21
UICTamaleI hope in the end I can help others with a clearer guide21:21
UICTamale / update the official docs21:21
ijwPersonally speaking I just wandered off with a glass of wine, it seemed far more attractive.  (And the iptables add in nova-net just isn't running, which is odd.)21:21
*** markmcclain has joined #openstack21:22
*** lorin1 has quit IRC21:22
*** ecarlin has quit IRC21:22
*** jrwren has joined #openstack21:23
annegentleUICTamale: yes please update official docs - http://wiki.openstack.org/Documentation/HowTo21:24
Diopterannegentle will bake you cookies if you do21:24
Diopter(offer not available in all locations, supplies limited)21:24
annegentleDiopter: ha ha you obviously haven't tasted my baking :)21:25
UICTamalemulti_host=true isn't even mentioned in that section of the docs - i only found it here:  http://docs.openstack.org/essex/openstack-compute/admin/content/compute-options-reference.html21:25
ijwWhich reminds me, I couldn't find a decent bit of documentation on nova secgroup-add-rule anywhere.21:25
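For the record, the usage is roughly nova secgroup-add-rule <group> <ip-proto> <from-port> <to-port> <cidr>; two common rules as examples:
    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0      # allow ssh from anywhere
    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0     # allow ping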
UICTamaleso yes, you better believe I'm gonna update that shit lol21:25
*** notmyname has quit IRC21:25
*** h0cin has quit IRC21:25
*** notmyname has joined #openstack21:25
*** ChanServ sets mode: +v notmyname21:25
*** sandywalsh has joined #openstack21:26
*** k0rupted has quit IRC21:26
*** rmartinelli has joined #openstack21:27
UICTamaleif I have fixed_range = 100.100.100.0/24, should I make network size 254 or 255 ?21:27
*** lts has quit IRC21:27
UICTamaleand dhcp_start should be 100.100.100.2 right ?21:27
*** colinmcnamara has left #openstack21:27
zykes-UICTamale: sounds about correct21:27
DiopterUICTamale: Depends on your node count. If you set the dhcp_start to .2, but you have two nodes, in multi_host, each of their bridges needs a unique IP in the fixed network, so you probably want something more like dhcp_start=x.x.x.1021:29
UICTamaleahh21:30
UICTamaleof course.21:30
uvirtbotNew bug: #1034158 in python-swiftclient "V2 auth may be used even if -V 1.0 is used" [Undecided,New] https://launchpad.net/bugs/103415821:30
*** rnorwood1 has quit IRC21:30
DiopterUICTamale: And as for network size, I believe it's 2^mask-2. So in this case, 25421:30
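Worked out for this example: a /24 leaves 32 - 24 = 8 host bits, so 2^8 - 2 = 254 usable addresses; since the network and broadcast addresses plus one bridge IP per nova-network node come out of that range, starting DHCP at something like 100.100.100.10 (as suggested above) leaves room for a handful of nodes.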
*** curahack1 has quit IRC21:30
contextso question, with all compute nodes running nova-network, do you have to configure each one with a different floating_range ?21:30
Dioptercontext: No. They share the floating range(s).21:31
DiopterAnd the fixed network(s)21:31
contextkk, so the nova-network daemons talk to eachother21:31
DiopterRight.21:31
DiopterOr more accurately21:31
UICTamaleI'll start the dhcp at 65 since that's where my floating IPs start too ;)21:31
Dioptertalk to the same controller API/RPC and DB21:31
*** rnorwood has joined #openstack21:31
contextdiopter: kk gotcha21:31
*** colinmcnamara has joined #openstack21:31
contexti dont totally get the point of the bridge device21:32
*** epim has quit IRC21:33
*** rnorwood has quit IRC21:33
*** capricorn_1 has joined #openstack21:33
ev0ldavethe bridge device allows traffic without natting21:34
Dioptercontext: So you don't have to have multiple vnet# devices each with unique IPs on them, for each VM to talk through.21:34
*** sandywalsh has quit IRC21:34
Dioptercontext: (slightly wrong, but right in this case): Interfaces on Linux by default are L3 devices if they have IPs on them, and L2 if not.21:34
*** dachary has quit IRC21:35
Dioptercontext: If they didn't have an IP, but weren't in a bridge, their ethernet frames would go nowhere.21:35
*** notmyname has quit IRC21:35
Dioptercontext: So by bridging them to other interfaces, you can forward frames (if you don't have a bridge IP), or route without NAT (if you do have a bridge IP)21:35
*** ejat has joined #openstack21:36
*** notmyname has joined #openstack21:36
*** ChanServ sets mode: +v notmyname21:36
*** joshuamckenty has quit IRC21:38
*** tdowg1 has quit IRC21:38
*** rnirmal has quit IRC21:39
*** Ryan_Lane1 has joined #openstack21:39
UICTamaleOk, I have to go for today - but I got close.21:39
UICTamalenova-network made the bridge this time21:39
UICTamaleand gave it a good ip21:39
DiopterSweet21:39
DiopterYou're basically there.21:40
UICTamalebut my second VM still shows this:  2012-08-07 21:39:09,965 - DataSourceEc2.py[WARNING]: 'http://169.254.169.254' failed: url error [[Errno 111] Connection refused]21:40
*** Ryan_Lane2 has joined #openstack21:40
UICTamalemy first VM, created on node01, worked21:40
DiopterIn multi_host, you'll also need to run the metadata service everywhere21:40
UICTamaleoh!!!21:40
Diopter(everywhere you run compute and network)21:40
Diopter:)21:40
UICTamalewhy didn't anyone tell me this!?!?21:41
*** Ryan_Lane2 has quit IRC21:41
*** Ryan_Lane2 has joined #openstack21:41
DiopterIt's an "HA" networking mode in the sense that each node is independent.21:41
*** Ryan_Lane has quit IRC21:41
uvirtbotNew bug: #1034161 in quantum "some platforms do not support namespaces" [High,Confirmed] https://launchpad.net/bugs/103416121:41
UICTamalelmao21:41
*** Ryan_Lane2 is now known as Ryan_Lane21:41
DiopterCompute, networking, metadata, routing, NAT, DHCP...21:41
UICTamaleok, that's TOTALLY understandable21:41
Diopteryarp :D21:41
DiopterJust not necessarily intuitive at first glance21:41
UICTamalewhich packages to install then along with nova-network and nova-compute?21:41
*** renier has quit IRC21:42
*** renier_ has joined #openstack21:42
*** ecarlin has joined #openstack21:42
ev0ldavewhat does multi_host do21:43
*** retr0h has quit IRC21:43
UICTamaleeverything, it turns out ;P21:43
contextuictamale: what OS21:43
DiopterUm... packagewise... not sure. Perhaps nova-api? What you want is nova-api-metadata.21:43
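A sketch of what that looks like on an Ubuntu 12.04 Essex compute node; the package name and port come from the discussion above, and since nova-api and nova-api-metadata conflict on Essex (as ijw noted), this is only for nodes not already running the full nova-api:
    apt-get install nova-api-metadata
    service nova-api-metadata start
    ss -lntp | grep 8775        # the metadata API should now be listening here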
contextive been following this using the debian-beta install release: http://wiki.debian.org/OpenStackHowto21:44
*** Ryan_Lane1 has quit IRC21:44
Diopterev0ldave: FlatDHCP + multi_host lets each compute host do its own NAT/routing without using the controller as a gateway21:44
*** pixelbeat has joined #openstack21:44
*** rkukura has joined #openstack21:44
contextdiopter: gotcha i think21:46
*** cloudvirt has joined #openstack21:46
* ijw corwns Diopter master of networking.21:46
ijwcrowns21:46
ijwMm, wine.21:46
contextanyone know if volume.NexentaDriver requires iscsi-target21:47
ijwAnd, now that you have received the ultimate accolade, can you tell me if there's a networking mode that just bridges the VMs to an interface without an intervening NAT layer, or whether I'll have to cheat?21:47
Diopterijw: lol. I'd hope I'm decent at it, it's kind of what I get paid for ;P21:47
*** Gordonz has quit IRC21:47
ev0ldavethey dont nat ijw21:47
ev0ldavethe only NATing is a one to one from public to private21:48
ev0ldavethe private is either bridged or assigned on a vlan21:48
Diopterijw: Yes. Flat or FlatDHCP without multi_host will just bridge, though without manually specifying otherwise, the default gateway (in FlatDHCP) that dnsmasq requires is the IP of the controller node, which is why I said without multi_host the controller is the gateway.21:49
*** s0mik has quit IRC21:49
Diopterijw: FlatDHCP is probably the most common and easiest networking mode to use, so going with the simple setup you'll end up bottlenecking on your controller. But if you arrange things so that the default gateway is the IP of an upstream router/LB/firewall, that's a fairly common config.21:49
*** s0mik has joined #openstack21:50
Diopterijw: Because then compute hosts just bridge VM's to whatever physical NIC they have that's on the same L2 fabric as their upstream device.21:50
ijwDiopter: in this instance I actually want to attach something to the L2 network that the VMs are on.21:50
* Diopter nods21:50
ijwI think I can see how I might do that, then.  I'll have a play.21:50
*** s0mik has quit IRC21:50
ijw… When I get the sodding gateway back.21:50
Diopterijw: If you want Nova to give VM's IPs and routes, use FlatDHCP. If you don't, use Flat.21:50
Diopter(though it still can through metadata injection)21:51
*** s0mik has joined #openstack21:51
*** cloudvirt has quit IRC21:51
*** k0rupted has joined #openstack21:51
ijwDiopter: FlatDHCP makes my life simpler from a VM configuring perspective, but what I actually need from the interfaces that I intend to give interesting (as opposed to monitoring and control) traffic to is a simple unadulterated L2 network.21:52
* Diopter nods21:53
ijwWhich, I think, means I just use FlatDHCP and attach something to the backbone to pass that traffic, which should work fine.21:53
DiopterYep, just got to make sure nova's dhcpbridge is setting up dnsmasq to hand out the right gateway IPs to instances21:53
DiopterSo they hit your upstream device21:54
DiopterPretty much just nova.conf stuffs21:54
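One way to do that, as a sketch: point nova-network at an extra dnsmasq config file and set the router option there; the file path and gateway address are illustrative:
    # nova.conf
    --dnsmasq_config_file=/etc/nova/dnsmasq-nova.conf
    # /etc/nova/dnsmasq-nova.conf
    dhcp-option=option:router,100.100.100.1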
ijwConveniently the traffic is ipv6, so as long as I leave flatDHCP in its v4 config I can do what I please with the v6 routing.21:54
*** ayoung has quit IRC21:54
DiopterErm. Have you actually used ipv6 all the way to your VMs yet?21:55
ijwNope.21:55
*** rocambole has quit IRC21:55
*** dubsquared has quit IRC21:55
*** kindaopsdevy has quit IRC21:55
*** s0mik has quit IRC21:55
DiopterBecause I totally introduced a bug that I haven't tested the fix for yet. :x21:55
ijwAnd I know there are magic options for the nova-network stuff which tentative experimentation suggests doesn't actually work very well, so not passing it through nova-network has a certain attraction.21:55
*** ewindisch has quit IRC21:56
Diopterhttps://bugs.launchpad.net/nova/+bug/101113421:56
*** kindaopsdevy_ has quit IRC21:56
*** acadiel has quit IRC21:56
DiopterThis might not even affect you21:56
DiopterBut, if it does, you could try the fix I posted there21:56
ijwMight not, but equally might, since the bridge will be in use.  I'll keep a bookmark on that in case.21:57
* Diopter nods21:57
DiopterI explained the issue pretty thoroughly, and I am pretty confident in my solution, but I never got around to testing it21:58
*** dwcramer has quit IRC21:58
ijwWhy is hairpin mode on at all?21:59
DiopterTo allow for contacting your own floating IP from within an instance.21:59
*** dubsquared has joined #openstack21:59
DiopterWhich, it turns out, a lot of people wanted to do, to avoid using split-DNS21:59
ijwOh, I see.21:59
DiopterSo they could have dumb services that are mobile between VMs, which talk to each other using hostnames that resolve to public IPs21:59
DiopterWhich might, in fact, be on the same VM22:00
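For context on the hairpin discussion: on KVM/libvirt hosts nova turns hairpin mode on per VIF by writing to sysfs, which is what lets a guest reach its own floating IP back through the same bridge port. A minimal illustration, with br100 and vnet0 as placeholder bridge/tap names:

    # reflect a port's own traffic back out of the same bridge port
    echo 1 > /sys/class/net/br100/brif/vnet0/hairpin_mode
    # inspect the current setting
    cat /sys/class/net/br100/brif/vnet0/hairpin_mode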
ijwJesus this whole setup is weird. ;)22:00
Diopterlolz22:00
DiopterWelcome to cloudy networking!22:00
ijwYou are this ip that you don't know you are.22:00
DiopterHope you enjoy your stay :P22:00
DiopterMmhmm22:00
*** kiffer84 has joined #openstack22:00
ijwI just want brainless networking from components that I piece together, and bugger everyone else with their strange cloudy app requirements.22:01
* Diopter chuckles22:01
DiopterOn the plus side22:01
ijwWhich is fine, cos I have tame quantum guys I can bitch at ;)22:01
Diopterhairpinning is entirely transparent due to my fix, unless of course you're using ICMPv6 multicast...22:01
DiopterWhich my fix for my fix should fix ;)22:01
ijwYou know, you're not selling me on the whole simplicity of approach element of this fix.22:02
*** ecarlin has quit IRC22:02
*** lborda has quit IRC22:02
Diopter>.>22:02
DiopterIdeally, I'd get the time to test it, make sure I'm right, then submit it so it's part of the project22:02
DiopterThen You wouldn't have to know or care it's there22:02
*** lazyshot has quit IRC22:02
DiopterBut I didn't get that far :P22:02
*** samkottler has quit IRC22:04
*** janisg has quit IRC22:05
*** ecarlin has joined #openstack22:05
*** kpavel has quit IRC22:06
*** kpavel_ has joined #openstack22:06
*** kpavel_ is now known as kpavel22:06
*** msinhore has quit IRC22:07
*** dolphm has quit IRC22:07
*** kindaopsdevy has joined #openstack22:08
ijwSo when should the address translation get installed in iptables for the metadata, on startup or on spinning up the first machine?  Cos apparently, for me, it's not going in at all at the moment.22:09
*** markmcclain has quit IRC22:09
*** ewindisch has joined #openstack22:10
ijwNot that I'm going to find out this evening, I'm off to bed imminently.22:10
*** dendro-afk is now known as dendrobates22:10
*** llang629 has quit IRC22:11
*** llang629 has joined #openstack22:12
*** ewindisch has quit IRC22:12
Diopterijw: Not entirely sure. I believe spinning up. Haven't traced that one too closely lately.22:12
*** littleidea has quit IRC22:13
*** s0mik has joined #openstack22:13
*** roaet has quit IRC22:14
ijwFine.  I'll have a prod tomorrow, anyway, put some tracing in and see if I can spot why it's not being run.  Looks very much like the code to do the add isn't being executed at the moment, which is bound to be some feature of my current settings.22:14
*** kindaopsdevy_ has joined #openstack22:15
*** gyee has quit IRC22:15
Diopterijw: Actually, wait, if you're not doing multi_host, your bridges won't have IPs except on your controller, which ostensibly is the only place metadata runs as well22:16
Diopterijw: So iptables rules won't exist for metadata NAT on the compute hosts22:16
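The "address translation for the metadata" being chased here is a DNAT rule that nova-network installs on whichever host runs the network/metadata service, roughly of this shape (169.254.169.254 is the metadata address; 8775 is the usual metadata port and the 192.168.1.1 controller IP is a placeholder):

    iptables -t nat -A nova-network-PREROUTING -d 169.254.169.254/32 \
        -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.1:8775

With multi_host each compute host gets the equivalent rule; without it, as Diopter notes, only the controller does.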
JoeJulianIs there a network model that doesn't NAT the public IP addresses? SIP/RTP doesn't seem to handle that very well.22:16
DiopterJoeJulian: What public IP addresses? Floating?22:17
JoeJulianI'm hoping to design my network more along the lines of how Rackspace is doing it.22:17
JoeJulianYes, floating.22:17
*** kindaopsdevy has quit IRC22:18
*** kindaopsdevy_ is now known as kindaopsdevy22:18
DiopterJoeJulian: Well, I work for Rackspace, so I might be able to help a little, though I'm about to head home from work in ~15 min :P22:18
DiopterJoeJulian: Are you hoping to assign public IPs directly to VMs?22:19
JoeJulianEh, I'll hit you up tomorrow then. I'll be here. And yes, public IPs directly to VMs.22:19
*** rnorwood has joined #openstack22:19
JoeJulianBut I'll be using KVM instead of Xen.22:19
*** renier_ has quit IRC22:19
*** kindaopsdevy has quit IRC22:19
*** sacharya has quit IRC22:19
*** kindaopsdevy has joined #openstack22:19
*** ev0ldave has quit IRC22:20
*** tgall_foo has quit IRC22:20
*** zz_jathanism is now known as jathanism22:20
*** s0mik has quit IRC22:20
*** samkottler has joined #openstack22:21
DiopterJoeJulian: You're basically looking at something like FlatDHCP + multi_host=true flag, single-NIC setup (combined control/VM plane; unless you want to invert the usual public/private 2-nic setup), with the fixed network being a public range, and arranging the related routing/NAT'ing flags appropriately22:21
DiopterJoeJulian: To avoid NAT/SNAT22:22
DiopterThough there are other solutions.22:22
DiopterBut anyhow, off for now. Later!22:22
JoeJulianMakes sense. Thanks22:22
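A hedged sketch of the setup Diopter outlines, with nova-network handing public addresses straight to VMs so floating-IP NAT isn't needed; flag names are Essex/Folsom-era from memory and every address below is a placeholder:

    # nova.conf on each compute host (multi_host FlatDHCP, single NIC)
    network_manager=nova.network.manager.FlatDHCPManager
    multi_host=true
    flat_network_bridge=br100
    flat_interface=eth0
    public_interface=br100
    # destinations listed here are exempt from the fixed-range SNAT;
    # some deployments use 0.0.0.0/0 to avoid SNAT'ing instance traffic at all
    dmz_cidr=0.0.0.0/0

    nova-manage network create --label=public --fixed_range_v4=203.0.113.0/24 \
        --bridge=br100 --multi_host=T --gateway=203.0.113.1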
*** renier has joined #openstack22:23
*** s0mik has joined #openstack22:23
*** KarinLevenstein has joined #openstack22:25
*** sdomme has quit IRC22:28
*** KarinLevenstein has quit IRC22:31
*** samkottler has quit IRC22:31
*** desai has quit IRC22:33
*** jathanism is now known as zz_jathanism22:34
*** troytoman is now known as troytoman-away22:35
*** nhm has joined #openstack22:37
*** dubsquared has quit IRC22:38
*** edygarcia has joined #openstack22:41
*** eglynn has quit IRC22:41
*** Blackavar has quit IRC22:41
*** s0mik has quit IRC22:42
*** rnorwood has quit IRC22:42
*** AlanClark has quit IRC22:44
*** rpawlik has joined #openstack22:45
*** dubsquared has joined #openstack22:45
*** edygarcia has quit IRC22:46
s34nif I run a second node with devstack and point it to the controller, it should show up in horizon, right?22:47
*** allsystemsarego has quit IRC22:49
*** matwood has quit IRC22:49
*** rmartinelli has quit IRC22:51
*** mattray has quit IRC22:52
*** halfss has joined #openstack22:55
*** e1mer has joined #openstack22:57
ijwUm, basically yes, but I don't think there's a node list in horizon, is there?23:01
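If, as ijw suggests, the dashboard of that era has no per-node listing, the usual check that a second compute node registered is from the controller, e.g.:

    nova-manage service list
    # a line like this appearing for the new host means it checked in
    # (host name and timestamp are illustrative):
    # nova-compute   node2   nova   enabled   :-)   2012-08-07 22:47:01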
*** tix has joined #openstack23:02
*** miclorb has joined #openstack23:02
*** mikal has quit IRC23:02
*** mikal has joined #openstack23:04
*** lloydde has quit IRC23:05
*** tomoe_ has joined #openstack23:06
*** halfss has quit IRC23:07
*** mikal has quit IRC23:09
*** dubsquared has quit IRC23:10
*** mikal has joined #openstack23:10
*** arBmind has quit IRC23:11
*** warik has joined #openstack23:12
*** dubsquared has joined #openstack23:15
*** mgriffin is now known as mgriffin_23:16
contexthmm, nova-cert started running on my second compute node, I only need nova-cert on the 'master', right?23:17
contexterr scratch that23:21
*** dubsquared has quit IRC23:21
contextI'm installing using the Debian 7 beta, but nova-volume cannot find the NexentaDriver23:21
*** mnewby has quit IRC23:23
contextnm23:23
*** MarkAtwood has left #openstack23:23
*** marrusl has quit IRC23:23
*** mnewby has joined #openstack23:24
*** acadiel has joined #openstack23:24
*** dendrobates is now known as dendro-afk23:25
*** mnewby has quit IRC23:25
*** albert23 has left #openstack23:26
*** hunglin has quit IRC23:30
*** hunglin has joined #openstack23:30
*** dwcramer has joined #openstack23:30
tixlol23:30
*** littleidea has joined #openstack23:33
*** hunglin has quit IRC23:35
*** ecarlin has quit IRC23:35
*** RicardoSSP has quit IRC23:36
*** warik has left #openstack23:40
*** alanmac has quit IRC23:44
*** mnewby has joined #openstack23:45
contextnice. My NexentaStor won't start any services23:50
*** dprince has quit IRC23:57
*** mjfork has quit IRC23:57
*** mjfork has joined #openstack23:58
