*** py has quit IRC | 00:03 | |
*** nati2 has quit IRC | 00:04 | |
*** Razique has joined #openstack | 00:09 | |
sniperd | Im trying to find out why the ppa .deb packages for 1.4.3 do NOT contain the swauth-* command line tools | 00:11 |
*** py has joined #openstack | 00:11 | |
*** tcampbell has quit IRC | 00:11 | |
*** Razique has quit IRC | 00:12 | |
*** krow1 has joined #openstack | 00:15 | |
*** krow has quit IRC | 00:16 | |
sniperd | mtaylor: any chance you can point me in the right direction? | 00:17 |
*** dosdawg has joined #openstack | 00:18 | |
pandemicsyn | sniperd: swauth is a separate non-openstack package, not sure exactly when it got pulled out (sometime around may). | 00:19 |
sniperd | pandemicsyn: its not included when using tempauth? | 00:19 |
pandemicsyn | sniperd: nope | 00:20 |
sniperd | pandemicsyn: Oh ok, thank you | 00:20 |
pandemicsyn | sniperd: https://github.com/gholt/swauth , http://gholt.github.com/swauth/ | 00:20 |
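A minimal sketch of installing swauth from gholt's repo and bootstrapping an account, assuming the proxy already has the swauth filter configured; the key, URL, and account/user names below are placeholders, not an official recipe.

```bash
# Hedged sketch: install swauth from source and bootstrap an account.
# -K is the super_admin_key set in the proxy's swauth filter section,
# -A is the auth admin URL; both values here are placeholders.
git clone https://github.com/gholt/swauth.git
cd swauth
sudo python setup.py install

swauth-prep     -K swauthkey -A http://127.0.0.1:8080/auth/
swauth-add-user -K swauthkey -A http://127.0.0.1:8080/auth/ -a myaccount myuser mypass
swauth-list     -K swauthkey -A http://127.0.0.1:8080/auth/
```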
*** rnorwood has joined #openstack | 00:21 | |
*** hggdh has quit IRC | 00:21 | |
*** anotherjesse has quit IRC | 00:21 | |
mtaylor | sniperd: what pandemicsyn said | 00:21 |
*** hugo_kuo has joined #openstack | 00:21 | |
*** hggdh has joined #openstack | 00:22 | |
*** osadmin has joined #openstack | 00:24 | |
osadmin | Hi folks need a little help with setting up a dhcp range for a project area anyone help? | 00:25 |
osadmin | network is 10.120.16.0/20 and I want flatdhcp assignment of addresses to range from 10.120.16.50 - 10.120.16.60 | 00:26 |
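For reference, a hedged sketch of one Cactus/Diablo-era way to approximate this: FlatDHCP has no start/end option, so a common workaround is to create the fixed network and then mark addresses outside the desired range as reserved. The nova-manage argument order varies by release (Diablo adds a label argument), and the network_id and DB credentials below are assumptions.

```bash
# Hedged sketch (argument order varies by release; Diablo adds a label):
nova-manage network create 10.120.16.0/20 1 4096

# Reserve everything outside 10.120.16.50-10.120.16.60 so dnsmasq only
# hands out that range (network_id=1 and the credentials are assumed).
mysql -u root -p nova -e "
  UPDATE fixed_ips SET reserved = 1
  WHERE network_id = 1
    AND INET_ATON(address) NOT BETWEEN INET_ATON('10.120.16.50')
                                   AND INET_ATON('10.120.16.60');"
```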
*** robmalchow has quit IRC | 00:27 | |
*** reed has quit IRC | 00:27 | |
*** epsas has joined #openstack | 00:29 | |
*** redconnection has joined #openstack | 00:30 | |
*** yshh has joined #openstack | 00:30 | |
osadmin | hi anyone know how to assign ip block ranges for VMs? | 00:31 |
*** krow1 has quit IRC | 00:31 | |
*** vladimir3p has quit IRC | 00:34 | |
*** nati2 has joined #openstack | 00:34 | |
*** reed has joined #openstack | 00:40 | |
*** ldlework has joined #openstack | 00:41 | |
*** rnorwood has quit IRC | 00:42 | |
*** ccustine has quit IRC | 00:43 | |
*** spackest has quit IRC | 00:44 | |
*** nati2 has quit IRC | 00:46 | |
*** yshh has quit IRC | 00:46 | |
*** nati2 has joined #openstack | 00:46 | |
*** ldlework has quit IRC | 00:49 | |
*** ldlework has joined #openstack | 00:49 | |
*** Zven has quit IRC | 00:50 | |
*** Zven has joined #openstack | 00:52 | |
*** hugo_kuo has quit IRC | 00:58 | |
*** rnorwood has joined #openstack | 01:00 | |
*** tux78 has quit IRC | 01:01 | |
*** shaon has quit IRC | 01:02 | |
*** dragondm_ has quit IRC | 01:02 | |
*** rsampaio has joined #openstack | 01:04 | |
*** huslage has joined #openstack | 01:06 | |
*** sandywalsh has joined #openstack | 01:06 | |
*** huslage has quit IRC | 01:06 | |
*** cloudgeek has quit IRC | 01:08 | |
*** rnorwood has quit IRC | 01:10 | |
*** ben_duyujie has joined #openstack | 01:10 | |
uvirtbot | New bug: #901952 in openstack-ci "Jenkins should run unit tests against Python 2.6" [Undecided,New] https://launchpad.net/bugs/901952 | 01:16 |
*** jj0hns0n has quit IRC | 01:20 | |
*** ccorrigan has joined #openstack | 01:21 | |
*** afm has quit IRC | 01:22 | |
*** afm has joined #openstack | 01:22 | |
*** donaldngo_hp has quit IRC | 01:24 | |
*** cloudgeek has joined #openstack | 01:24 | |
*** dotdevops has quit IRC | 01:24 | |
*** ccorrigan has quit IRC | 01:26 | |
*** afm has quit IRC | 01:27 | |
*** mwhooker has quit IRC | 01:29 | |
*** osadmin has quit IRC | 01:30 | |
*** afm has joined #openstack | 01:32 | |
*** rustam has quit IRC | 01:32 | |
*** swill has left #openstack | 01:36 | |
*** pixelbeat has quit IRC | 01:37 | |
annegentle | Anyone else in channel also at the OpenStack-Austin meeting? #osatx tag on twitter backchannel but no IRC backchannel. :) | 01:37 |
*** dysinger has quit IRC | 01:38 | |
*** lloydde has quit IRC | 01:39 | |
*** lorin1 has joined #openstack | 01:42 | |
*** dysinger has joined #openstack | 01:45 | |
*** kaz has quit IRC | 01:45 | |
*** kaz_ has joined #openstack | 01:46 | |
*** lzyeval has joined #openstack | 01:49 | |
*** AlanClark has quit IRC | 01:50 | |
*** lorin1 has left #openstack | 01:51 | |
*** shang has joined #openstack | 02:00 | |
*** leitz has joined #openstack | 02:00 | |
*** tmichael has quit IRC | 02:01 | |
*** rnorwood has joined #openstack | 02:03 | |
*** tmichael has joined #openstack | 02:03 | |
*** donaldngo_hp has joined #openstack | 02:06 | |
*** tmichael has quit IRC | 02:08 | |
leitz | New to OpenStack. Can I run it as a proof of concept on a laptop with a P8400 chip? | 02:12 |
*** blees has joined #openstack | 02:15 | |
*** blees has quit IRC | 02:21 | |
*** osier has joined #openstack | 02:24 | |
*** po has quit IRC | 02:24 | |
*** leitz has quit IRC | 02:27 | |
*** livemoon has joined #openstack | 02:29 | |
*** maplebed has quit IRC | 02:30 | |
*** jakedahn has quit IRC | 02:40 | |
*** miclorb_ has quit IRC | 02:40 | |
*** miclorb_ has joined #openstack | 02:40 | |
*** jog0 has joined #openstack | 02:42 | |
*** jog0 has quit IRC | 02:43 | |
*** jakedahn has joined #openstack | 02:43 | |
*** jakedahn has quit IRC | 02:44 | |
*** Ryan_Lane has quit IRC | 02:45 | |
*** jakedahn has joined #openstack | 02:50 | |
*** reed has quit IRC | 02:51 | |
*** rsampaio has quit IRC | 02:53 | |
*** tmichael has joined #openstack | 02:55 | |
*** nid0 has quit IRC | 02:55 | |
cloudgeek | hey all morning | 02:56 |
hugokuo | morning | 02:56 |
*** Zven has quit IRC | 02:56 | |
cloudgeek | hugokuo: is your openstack in production | 02:57 |
hugokuo | in development for products ...... and production nova swift as our internal IT infra now | 03:01 |
*** osadmin has joined #openstack | 03:02 | |
*** mwhooker has joined #openstack | 03:02 | |
*** andrewbogott has quit IRC | 03:04 | |
*** rsampaio has joined #openstack | 03:06 | |
*** hadrian has quit IRC | 03:06 | |
*** lorin11 has joined #openstack | 03:07 | |
*** zigo has joined #openstack | 03:07 | |
*** rnorwood has quit IRC | 03:08 | |
*** rods has quit IRC | 03:12 | |
*** rnorwood has joined #openstack | 03:13 | |
*** jakedahn has quit IRC | 03:17 | |
*** rsampaio has quit IRC | 03:25 | |
*** nati2_ has joined #openstack | 03:27 | |
*** nati2 has quit IRC | 03:28 | |
*** scottjg has joined #openstack | 03:32 | |
*** AaronSchulz is now known as Aaron|away | 03:36 | |
*** Aaron|away has quit IRC | 03:40 | |
*** helfrez has quit IRC | 03:42 | |
*** redconnection has quit IRC | 03:42 | |
*** supriya has joined #openstack | 03:42 | |
*** tmichael has quit IRC | 03:53 | |
*** osier has quit IRC | 04:13 | |
*** lorin11 has quit IRC | 04:16 | |
*** miclorb_ has quit IRC | 04:16 | |
*** PeteDaGuru has quit IRC | 04:23 | |
*** Ryan_Lane has joined #openstack | 04:28 | |
*** adjohn has joined #openstack | 04:39 | |
*** maplebed has joined #openstack | 04:41 | |
*** maplebed has quit IRC | 04:41 | |
*** tightwork has joined #openstack | 04:46 | |
tightwork | sup | 04:46 |
*** kbringard has joined #openstack | 04:51 | |
*** dnjaramba has quit IRC | 04:52 | |
*** cstacker has joined #openstack | 04:54 | |
*** Rajaram has joined #openstack | 04:55 | |
cstacker | Can anyone help me figure out why my OS install isn't quite right? I can't connect server 2 | 04:57 |
*** cstacker has quit IRC | 05:01 | |
*** dnjaramba has joined #openstack | 05:02 | |
*** wariola has quit IRC | 05:06 | |
*** gavri1 has joined #openstack | 05:08 | |
*** mjfork has quit IRC | 05:22 | |
*** koolhead17 has quit IRC | 05:30 | |
*** donaldngo_hp has quit IRC | 05:34 | |
*** tmichael has joined #openstack | 05:37 | |
*** cloudgeek has quit IRC | 05:39 | |
*** jdurgin has quit IRC | 05:53 | |
*** cloudgeek has joined #openstack | 05:54 | |
*** jdurgin has joined #openstack | 05:58 | |
*** jdurgin has quit IRC | 05:59 | |
*** cloudgeek has quit IRC | 06:02 | |
*** rnorwood has quit IRC | 06:08 | |
*** lonetech007 has quit IRC | 06:09 | |
uvirtbot | New bug: #902011 in openstack-ci "Code coverage job for Quantum" [Undecided,New] https://launchpad.net/bugs/902011 | 06:11 |
*** zigo has quit IRC | 06:14 | |
*** osier has joined #openstack | 06:15 | |
*** cloudgeek has joined #openstack | 06:16 | |
*** nelson1234 has quit IRC | 06:19 | |
*** katkee has joined #openstack | 06:35 | |
*** koolhead17 has joined #openstack | 06:38 | |
*** koolhead17 has left #openstack | 06:38 | |
*** Ryan_Lane has quit IRC | 06:40 | |
*** Ryan_Lane has joined #openstack | 06:46 | |
*** bowtie has joined #openstack | 06:52 | |
bowtie | SanFran nova hackathon was awesome | 06:53 |
*** krow has joined #openstack | 06:53 | |
*** TheOsprey has joined #openstack | 06:54 | |
*** jj0hns0n has joined #openstack | 06:54 | |
*** krow1 has joined #openstack | 06:57 | |
*** krow has quit IRC | 06:58 | |
*** Arminder-Office has quit IRC | 07:00 | |
*** ldlework has quit IRC | 07:01 | |
*** mindpixel has joined #openstack | 07:01 | |
*** rocambol1 has joined #openstack | 07:06 | |
*** redconnection has joined #openstack | 07:14 | |
*** cloudgeek has quit IRC | 07:14 | |
*** mgoldmann has joined #openstack | 07:15 | |
*** redconnection has quit IRC | 07:17 | |
*** kaigan_ has joined #openstack | 07:25 | |
*** cloudgeek has joined #openstack | 07:30 | |
*** guigui has joined #openstack | 07:31 | |
*** bowtie has quit IRC | 07:38 | |
*** marcuz has joined #openstack | 07:41 | |
*** cloudgeek has quit IRC | 07:42 | |
*** krow1 has quit IRC | 07:50 | |
*** TeTeT has joined #openstack | 07:52 | |
*** tmichael has quit IRC | 07:59 | |
*** cloudgeek has joined #openstack | 08:00 | |
*** tmichael has joined #openstack | 08:00 | |
*** pradeep has joined #openstack | 08:02 | |
*** Ryan_Lane has quit IRC | 08:02 | |
*** nati2_ has quit IRC | 08:06 | |
*** DavorC has quit IRC | 08:06 | |
uvirtbot | New bug: #902052 in openstack-ci "Gerrit should support private reviews for security bugs" [Wishlist,New] https://launchpad.net/bugs/902052 | 08:06 |
*** adjohn has quit IRC | 08:14 | |
uvirtbot | New bug: #902058 in tempest "Tempest has many errors while running code from master" [Undecided,New] https://launchpad.net/bugs/902058 | 08:16 |
*** tightwork has left #openstack | 08:17 | |
*** reidrac has joined #openstack | 08:18 | |
*** katkee has quit IRC | 08:20 | |
*** supriya has quit IRC | 08:22 | |
*** supriya has joined #openstack | 08:23 | |
*** cloudgeek has quit IRC | 08:28 | |
*** ahasenack has joined #openstack | 08:28 | |
*** asdfasdf_ has joined #openstack | 08:35 | |
*** foexle-afk is now known as foexle | 08:39 | |
*** nerens has joined #openstack | 08:40 | |
*** cloudgeek has joined #openstack | 08:42 | |
*** ccorrigan has joined #openstack | 08:52 | |
*** TeTeT has quit IRC | 08:53 | |
*** supriya has quit IRC | 08:56 | |
*** Arminder has joined #openstack | 08:58 | |
*** supriya has joined #openstack | 08:59 | |
*** Tristani3D has joined #openstack | 09:08 | |
Tristani3D | Morning guys (at least here it is) | 09:09 |
Tristani3D | I am having an issue where the state of my instance is staying build | 09:09 |
Tristani3D | In the compute.log on the node it keeps saying "found 1 in database and 0 in the hypervisor" | 09:10 |
Tristani3D | Where to look to see whats wrong? | 09:10 |
*** shaon has joined #openstack | 09:23 | |
*** shaon has quit IRC | 09:35 | |
tdi | win 3 | 09:35 |
*** scottjg has quit IRC | 09:52 | |
*** daysmen has joined #openstack | 09:54 | |
*** lzyeval has quit IRC | 09:54 | |
*** rustam has joined #openstack | 09:56 | |
*** Rajaram has quit IRC | 09:57 | |
*** Rajaram has joined #openstack | 09:57 | |
*** koolhead11 has joined #openstack | 09:59 | |
*** livemoon has quit IRC | 10:05 | |
*** darraghb has joined #openstack | 10:09 | |
*** katkee has joined #openstack | 10:13 | |
*** kbringard has quit IRC | 10:15 | |
*** pradeep has quit IRC | 10:16 | |
*** dysinger has quit IRC | 10:19 | |
koolhead11 | hi all | 10:21 |
*** nerens has quit IRC | 10:24 | |
*** nerens has joined #openstack | 10:25 | |
koolhead11 | ttx: hey | 10:27 |
ttx | koolhead11: yo | 10:27 |
koolhead11 | ttx: how come bugs are not fixed for the diablo branch now? | 10:29 |
* koolhead11 is confused | 10:29 | |
ttx | koolhead11: only selkected fixed get backported to stable/diablo | 10:30 |
ttx | selected fixes* | 10:30 |
ttx | koolhead11: you can see what's in stable/diablo using the "in-stable-diablo" tag | 10:30 |
*** Arminder has quit IRC | 10:31 | |
ttx | koolhead11: you can suggest bugs for inclusion using the "diablo-backport" tag | 10:31 |
soren | ttx: Why don't we use bug targeting? | 10:31 |
koolhead11 | ttx: https://bugs.launchpad.net/horizon/+bug/888385 | 10:31 |
koolhead11 | it says bug fixed | 10:32 |
koolhead11 | but when i do | 10:32 |
ttx | soren: there was a serious limitation to that. Let me remember | 10:32 |
koolhead11 | git clone https://github.com/4P/horizon | 10:32 |
ttx | koolhead11: it's supposedly fixed in essex. | 10:32 |
koolhead11 | cd horizon git checkout stable/diablo | 10:32 |
ttx | that's not the git repo you should be using | 10:32 |
ttx | "Code is available at: https://github.com/openstack/horizon" | 10:33 |
*** pradeep1 has joined #openstack | 10:33 | |
koolhead11 | ttx: am sorry but this is the 3rd time its location got changed am guessing | 10:33 |
koolhead11 | launchpad -->4P-->now this | 10:33 |
koolhead11 | :( | 10:33 |
* koolhead11 tried to get it from new location | 10:34 | |
ttx | koolhead11: all projects are under https://github.com/openstack/ | 10:34 |
*** pradeep1 has quit IRC | 10:34 | |
koolhead11 | ttx: hmm but 20 days back i suppose we all were using 4P for horizon as it was in docs i remember | 10:34 |
ttx | koolhead11: I know, it's difficult to see which one is the "main" one with github | 10:34 |
koolhead11 | ttx: will have to modify docs accordingly because am sure many would still be using the 4P location | 10:35 |
ttx | koolhead11: since then horizon was promoted to a core project | 10:35 |
ttx | and moved to main infrastructure. | 10:35 |
koolhead11 | aah. now that is news to me. so many things have changed in 2 weeks : | 10:35 |
koolhead11 | :) | 10:35 |
koolhead11 | ttx: let me try this new repo address | 10:36 |
ttx | koolhead11: but I don't think it will be fixed in stable/diablo. The bug tracks the fix in the development version, like Ubuntu does | 10:36 |
koolhead11 | ttx: which means dash still has bug with diablo? | 10:37 |
koolhead11 | rather major bug | 10:37 |
*** ahasenack has quit IRC | 10:37 | |
koolhead11 | ttx: when are we having next meeting? | 10:38 |
ttx | koolhead11: next general meeting on Tuesday. But you can ping Horizon devs before then | 10:38 |
ttx | koolhead11: I don't know more than you on this one -- based on the bug, I see no evidence they fixed it anywhere else than in trunk. | 10:39 |
*** ben_duyujie has left #openstack | 10:42 | |
koolhead11 | ttx: all stuff will work with the stackx.sh script because am assuming it takes everything from trunk, which means essex | 10:43 |
koolhead11 | but when am using it in my existing infra on top of diablo/oneiric | 10:44 |
koolhead11 | am hitting my head against the wall | 10:44 |
koolhead11 | and i also think this is one reason why packaging of dash or keystone is yet to come in working mode | 10:45 |
ttx | koolhead11: If I were you I would try to corner some horizon devs to set the expectations right | 10:46 |
koolhead11 | ttx: yes, i will do the same. :) | 10:46 |
ttx | by design, I'm focused on Essex (since that's the first release of Horizon as far as I'm concerned) | 10:46 |
* koolhead11 pokes devcamcar :D | 10:46 | |
koolhead11 | ttx: true | 10:47 |
koolhead11 | ttx: first thing i would request them to make this https://github.com/4P/horizon invisible :D | 10:49 |
ttx | koolhead11: github by design considers all repos 1st-class citizens | 10:51 |
koolhead11 | ttx: but one can put his repo in peace/unavailable mode with status we have moved to new home!! :) | 10:52 |
*** ahasenack has joined #openstack | 10:55 | |
*** fabiand__ has joined #openstack | 10:58 | |
koolhead11 | ttx: so to get diablo branch after getting the horizon from https://github.com/openstack/ | 11:01 |
koolhead11 | i will do the same: git checkout stable/diablo | 11:01 |
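For reference, the sequence being described, assuming the stable/diablo branch is published on the official repo:

```bash
git clone https://github.com/openstack/horizon.git
cd horizon
git branch -r                                        # confirm which branches exist
git checkout -b stable/diablo origin/stable/diablo   # track the stable branch
```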
*** dnjaramba has quit IRC | 11:03 | |
*** dnjaramba has joined #openstack | 11:03 | |
*** elasticdog has quit IRC | 11:06 | |
hugokuo | What is ajax-console for ? | 11:09 |
hugokuo | any using example will be better :> | 11:10 |
*** asdfasdf_ has quit IRC | 11:10 | |
koolhead11 | hugokuo: hey there | 11:10 |
hugokuo | yes | 11:10 |
koolhead11 | ttx: as you said https://bugs.launchpad.net/horizon/+bug/888385 is not fixed for diablo :( | 11:11 |
*** pixelbeat has joined #openstack | 11:17 | |
*** donagh has quit IRC | 11:29 | |
*** fabiand__ has left #openstack | 11:30 | |
*** livemoon has joined #openstack | 11:35 | |
*** Oneiroi has joined #openstack | 11:44 | |
Oneiroi | morning everyone, finally my work load is starting to clear, though I've probably just jinxed it by saying so now | 11:45 |
*** hingo has joined #openstack | 11:45 | |
*** kaigan_ has quit IRC | 11:49 | |
*** bsza has joined #openstack | 11:50 | |
*** AntoniHP has quit IRC | 11:57 | |
*** pradeep has joined #openstack | 11:59 | |
*** ahasenack has quit IRC | 12:04 | |
*** jakkudanieru has quit IRC | 12:15 | |
*** ninkotech has quit IRC | 12:18 | |
zykes- | hmm | 12:22 |
zykes- | i've got an iscsi initiator setup, discovered the luns and seems ok | 12:22 |
zykes- | when i do a manual login the disks show but if i do a iscsi restart it doesn't automatically login ? | 12:22 |
*** supriya has quit IRC | 12:23 | |
zykes- | it seems that if the "default" file is missing in a node config it doesn't start it automatically ? | 12:24 |
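A hedged open-iscsi sketch for this situation: node records whose node.startup is left at "manual" are not logged in when the service restarts. The IQN and portal below are placeholders.

```bash
# Re-run discovery if the node record under /etc/iscsi/nodes/ is missing
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Mark the record to log in automatically on service start
iscsiadm -m node -T iqn.2011-12.example:target0 -p 192.168.1.10:3260 \
         --op update -n node.startup -v automatic

# Verify the setting took effect
iscsiadm -m node -T iqn.2011-12.example:target0 -p 192.168.1.10:3260 | grep node.startup
```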
*** rods has joined #openstack | 12:24 | |
*** elasticdog has joined #openstack | 12:26 | |
*** elasticdog has joined #openstack | 12:26 | |
*** Arminder has joined #openstack | 12:27 | |
*** fridim_ has joined #openstack | 12:28 | |
zykes- | anyone got a clue ? | 12:31 |
*** PotHix has joined #openstack | 12:32 | |
*** Razique has joined #openstack | 12:32 | |
Razique | HI all | 12:32 |
Razique | My production is down, I can't run anymore instances | 12:32 |
Razique | the dhcp attribution fails with Attempting to grab semaphore "get_dhcp" for method "_get_dhcp_ip". | 12:32 |
zykes- | network problems Razique ? | 12:33 |
Razique | zykes-: yah :( | 12:33 |
Razique | vlans and bridges go up | 12:33 |
Razique | dnmasq process runs | 12:33 |
Razique | it looks like the ip attribution fails | 12:33 |
Razique | unfortunately no error into logs | 12:33 |
foexle | salut Razique :) | 12:34 |
Razique | hey foexle | 12:34 |
Razique | that a real drama here :( | 12:34 |
Razique | the whole production is down | 12:34 |
zykes- | Razique: you good on iscsi ? | 12:34 |
Razique | zykes-: yup | 12:34 |
Razique | new test instances don't get ip | 12:34 |
Razique | from nova-network | 12:34 |
Razique | i' don't use multi_host mode | 12:34 |
koolhead11 | hola Razique | 12:35 |
Razique | hey koolhead11 | 12:35 |
zykes- | Razique: hint on why my box isn't automatically starting targets ? | 12:35 |
Razique | yesterday we've been attacked | 12:35 |
koolhead11 | i got you msg early morning | 12:35 |
foexle | Razique: stop all services => kill dnsmasq => delete vnet and bridge => restart ethx (your fixed_ip dev) => start all services again | 12:35 |
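That recipe, written out as a hedged shell sketch for a VLAN-mode Diablo node; the bridge, VLAN, and interface names are examples and service names vary by packaging.

```bash
service nova-network stop && service nova-compute stop    # stop nova services
pkill dnsmasq                                             # kill nova's dnsmasq

ifconfig br100 down && brctl delbr br100                  # delete the bridge
vconfig rem vlan100                                       # delete the vlan device

ifdown eth1 && ifup eth1                                  # bounce the fixed_ip interface

service nova-network start && service nova-compute start  # nova recreates vlan/bridge/dnsmasq
```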
Razique | foexle: ok let's try | 12:35 |
koolhead11 | hola foexle zykes- :) | 12:35 |
foexle | hey koolhead11 :) | 12:35 |
zykes- | is there supposed to be /etc/iscsi/nodes/<iqn>/<portal>/default? | 12:36 |
Razique | same 2011-12-09 13:36:26,232 DEBUG nova.utils [-] Attempting to grab semaphore "get_dhcp" for method "_get_dhcp_ip"... from (pid=12084) inner /usr/lib/python2.6/dist-packages/nova/utils.py:672 | 12:36 |
Razique | the instance doesn't get the ip | 12:36 |
*** markvoelker has joined #openstack | 12:37 | |
koolhead11 | zykes-: how have you been buddy!! :d | 12:37 |
koolhead11 | foexle: wassup? | 12:37 |
zykes- | good enough, kicking something soon for the iscsi to work, got a clue Razique ? | 12:38 |
*** mcclurmc has quit IRC | 12:38 | |
Razique | zykes-: wait, I' need to make the production work asap :/ | 12:38 |
Razique | wtf is going :/ | 12:38 |
koolhead11 | Razique: :) | 12:39 |
*** mcclurmc has joined #openstack | 12:39 | |
foexle | koolhead11: network issues, network issues ... ahhhh i forgot .... network issues :D | 12:39 |
Razique | hey koolhead11 | 12:39 |
Razique | man I'm so flipped out | 12:39 |
Razique | the whole site is down since yesterday =D | 12:40 |
Razique | and I'm fricking tired haha | 12:40 |
koolhead11 | Razique: realx :D | 12:40 |
Razique | customers are angry | 12:40 |
foexle | :( oh man Razique .... whats going on ? you cant start instances because you dont get a fiexed ip ? | 12:40 |
Razique | yup | 12:40 |
Razique | the instances don't get fixed ips yah | 12:41 |
Razique | while it use to work | 12:41 |
foexle | hmmm are enough free ip's in pool ? | 12:41 |
Razique | it's a /24 network | 12:42 |
foexle | yeah do you have looked in your database if you have enough free ip's ? | 12:43 |
Razique | should the fixed ips be attached to a network | 12:43 |
*** Arminder has quit IRC | 12:43 | |
*** supriya has joined #openstack | 12:43 | |
Razique | or should they have "NULL" as a network | 12:43 |
foexle | sec | 12:43 |
Razique | in order to make dnsmasq to use them | 12:43 |
foexle | network_id: 1 | 12:44 |
foexle | yeah | 12:44 |
foexle | you need to have a network id | 12:44 |
Razique | so suppose I've 200 fixed ips | 12:46 |
Razique | while only 5 belong to a network | 12:46 |
Razique | does that mean the 6th instance won't get the fixed ip ? | 12:46 |
foexle | yeah i think so | 12:46 |
foexle | so it looks like the fixed ip dont have an auto allocation so you need to allocate fixed_ips to a network manually | 12:47 |
foexle | i'm using for each project /24 network | 12:47 |
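A hedged way to check how many fixed IPs are attached to each network and still free, querying the fixed_ips table directly; the DB name and credentials are assumptions, column names are the Diablo-era ones.

```bash
# How many fixed IPs exist per network, and how many are still free
mysql -u root -p nova -e "
  SELECT network_id,
         COUNT(*) AS total,
         SUM(allocated = 0 AND leased = 0 AND reserved = 0) AS free
  FROM fixed_ips
  GROUP BY network_id;"
```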
Razique | ok | 12:48 |
Razique | what are the iptables rules used for the dhcp | 12:48 |
Razique | i'll check on a node | 12:48 |
*** cloudgeek has quit IRC | 12:48 | |
Razique | ok i've checked iptables | 12:49 |
Razique | looks like on the node the dhcp rule doesn't exist | 12:49 |
Razique | foexle: <nova-cc1:root> [09-12 13:50] ~ # cat /var/lib/nova/networks/nova-br100.conf | 12:50 |
Razique | 02:16:3e:5d:78:3a,i-00000088.novalocal,10.0.1.6 | 12:50 |
foexle | so i'm using vlan .... there is no route | 12:50 |
Razique | foexle: ok so do I here | 12:50 |
Razique | br100 and vlan100 between the nodes and nova network | 12:51 |
Razique | and no multi_host | 12:51 |
foexle | yeah .... but you dont need a route for dhcp | 12:51 |
Razique | oh ok | 12:51 |
Razique | thanks | 12:52 |
foexle | take a look in your syslog | 12:52 |
foexle | you will see (i hope :D ) your dhcp requests | 12:52 |
Razique | Dec 9 13:41:37 nova-cn3 dnsmasq-dhcp[1136]: DHCP packet received on br100 which has no address | 12:52 |
foexle | oh | 12:52 |
livemoon | exit | 12:53 |
livemoon | sorry | 12:53 |
foexle | livemoon: :D | 12:53 |
livemoon | foexle: hi | 12:53 |
foexle | Razique: ip addr show .... | 12:53 |
foexle | hey livemoon :> | 12:53 |
*** guigui has quit IRC | 12:53 | |
livemoon | these days I'm writing a little program to collect instance info and monitor its disk and network | 12:54 |
Razique | foexle: http://paste.openstack.org/show/3714/ | 12:54 |
Razique | ok so that is the issue then | 12:54 |
foexle | Razique: do you know who the maintainer is of nova-network ? | 12:54 |
Razique | i think it soren | 12:55 |
Razique | but not sure | 12:55 |
foexle | kk | 12:55 |
foexle | yeah your bridge needs a fixed_ip | 12:55 |
Razique | does ur bridge have a fixed ip ? | 12:56 |
Razique | how do u set that ? | 12:56 |
foexle | automatically | 12:56 |
*** supriya has quit IRC | 12:56 | |
foexle | do you have set bridge_interface and bridge in your network table ? | 12:56 |
Razique | foexle: in fact because I've an heterogeneous network | 12:57 |
*** jeremy has quit IRC | 12:57 | |
Razique | I don't use it, it's the bug we identified with vidd-away | 12:57 |
Razique | virsh | 12:57 |
Razique | so into linux_net.py | 12:57 |
foexle | oh really ? .... i've no problems | 12:58 |
Razique | foexle: on the node: br100 8000.1cc1dee64e02 no vlan100 | 12:58 |
Razique | oups | 12:58 |
Razique | http://paste.openstack.org/show/3715/ | 12:59 |
foexle | looks good | 12:59 |
Razique | does that look like an expected behaviour | 12:59 |
*** fridim_ has quit IRC | 13:00 | |
*** hugokuo has quit IRC | 13:00 | |
*** osier has quit IRC | 13:01 | |
*** nerens has quit IRC | 13:03 | |
*** aliguori has joined #openstack | 13:03 | |
*** cloudgeek has joined #openstack | 13:04 | |
*** jaypipes has quit IRC | 13:05 | |
*** nerens has joined #openstack | 13:06 | |
*** rustam has quit IRC | 13:07 | |
Razique | foexle: into fixed_ips database | 13:10 |
*** guigui1 has joined #openstack | 13:10 | |
foexle | Razique: ? | 13:11 |
Razique | should I update the network_id field ? | 13:11 |
foexle | yeah | 13:11 |
Razique | how to clear the table | 13:11 |
Tristani3D | Hi all | 13:11 |
Razique | allocated : 0 leased : 0 reserved untcouhed | 13:11 |
Tristani3D | I am receiving "Unable to get service info: 'NoneType' object has no attribute 'makefile'" in Nova Dashboard | 13:11 |
Razique | network_id : 8 | 13:11 |
foexle | hmmm | 13:11 |
Razique | and also virtual_interface_id : NULL | 13:12 |
Razique | foexle: would you check for me into ur database ? | 13:12 |
foexle | yeah thats correct | 13:12 |
Razique | ok thanks | 13:12 |
Tristani3D | Anyone an idea? | 13:13 |
Razique | foexle: virtual_interfaces table | 13:13 |
Razique | should it be empty if no ip was leased ? | 13:13 |
Razique | or is it linked to running instances ? | 13:14 |
Razique | looks so | 13:14 |
Razique | :p | 13:14 |
foexle | hmmm hmm good question .... | 13:15 |
foexle | so i think only running instances | 13:15 |
*** yshh has joined #openstack | 13:17 | |
foexle | but i see in my table old instances .... but i think i've inconsistency in this table whyever.... | 13:17 |
Razique | always the same here 2011-12-09 14:18:25,752 DEBUG nova.utils [-] Attempting to grab semaphore "get_dhcp" for method "_get_dhcp_ip"... from (pid=14208) inner /usr/lib/python2.6/dist-packages/nova/utils.py:672 | 13:19 |
Razique | soren: herE ? | 13:19 |
soren | Razique: Yes? | 13:19 |
Razique | would you help me ? big dhcp issue here | 13:19 |
Razique | my instances don't retrieve an IP address, while the vlan and br are up | 13:20 |
soren | What's the problem? | 13:20 |
soren | Ok. | 13:20 |
soren | Does it try and just fail? | 13:20 |
soren | OR does it never try? | 13:20 |
Razique | I see that on the node : Dec 9 14:18:47 nova-cn3 dnsmasq-dhcp[1136]: DHCP packet received on br100 which has no address | 13:20 |
Razique | the instance tries and fail | 13:20 |
Razique | and into nova-network : 2011-12-09 14:18:25,752 DEBUG nova.utils [-] Attempting to grab semaphore "get_dhcp" for method "_get_dhcp_ip"... from (pid=14208) inner /usr/lib/python2.6/dist-packages/nova/utils.py:672 | 13:20 |
soren | Is that from you network host? | 13:20 |
*** livemoon has quit IRC | 13:20 | |
Razique | yup | 13:20 |
Razique | the last one comes from nova-network host | 13:21 |
Razique | while the br100 message belongs to the compute node which runs the instance | 13:21 |
soren | Look in the network hosts's syslog for stuff from dnsmasq. | 13:21 |
Razique | it's trying to read the lease file http://paste.openstack.org/show/3716/ | 13:22 |
Razique | into the file 02:16:3e:08:1f:25,server-252.novalocal,10.0.1.4 | 13:22 |
soren | But it doesn't seem to be getting any requests. | 13:22 |
soren | Did you fiddle around with iptables at all? | 13:23 |
Razique | the compute node does via its bridge | 13:23 |
Razique | while the instance itself is unable to reach the server | 13:23 |
Razique | soren: how ? | 13:23 |
soren | How what? | 13:24 |
Razique | ok I flused and restarted the rule | 13:24 |
Razique | but still that same issue : DHCP packet received on br100 which has no address | 13:24 |
soren | What does that mean? | 13:24 |
Razique | well I don't know | 13:26 |
Razique | it prevents instances from getting their ip | 13:26 |
Kiall | Razique: is there by any chance an extra dnsmasq running? | 13:26 |
soren | What do you mean " you don't know"? It was you who said it? | 13:26 |
soren | What does "I flused and restarted the rule" mean? | 13:26 |
Kiall | On ubuntu, when the dnsmasq package gets installed, its init script needs to be disabled so OS can control it... | 13:27 |
soren | Kiall: No. | 13:27 |
Razique | there was yes | 13:27 |
Razique | I killed it, and i'm trying to do a clean respawn | 13:27 |
*** yamahata_ has joined #openstack | 13:27 | |
soren | Kiall: You just don't install "dnsmasq". You install dnsmasq-base. That's why I added it. | 13:27 |
Kiall | soren: ah! | 13:27 |
* Kiall updates his packages ;) | 13:27 | |
Kiall | Anyway - I've seen a rogue dnsmasq cause that "DHCP packet received on br100 which has no address" message in syslog... | 13:28 |
soren | Why would it matter? | 13:29 |
soren | It just says "hey, there's something here I can't handle". | 13:29 |
soren | Or, paraphrasing: "meh" | 13:29 |
Razique | soren: I think everything is related to that semaphore grabbing stuff | 13:29 |
Razique | from nova-network | 13:30 |
Kiall | The init script managed dnsmasq binds to all interfaces and prevents the correct dnsmasq instances from receiving the requests.. | 13:30 |
soren | Razique: Ok. | 13:30 |
soren | Kiall: If it's on the compute node, I don't see how it matters. | 13:30 |
soren | Kiall: It can't "steal" the dhcp packets and prevent them from reaching the network host. | 13:30 |
soren | afaik | 13:30 |
soren | I can't see how it would. | 13:31 |
Kiall | Ah - No i dont believe it can.. Unless you use multi-node networking.. | 13:31 |
soren | WEll, it still can't steal it and prevent it from reaching itself :) | 13:32 |
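If a distro-managed dnsmasq does turn out to be in the way, a hedged Ubuntu-era check and workaround follows (installing only dnsmasq-base, as noted above, avoids this in the first place).

```bash
ps aux | grep [d]nsmasq           # anything besides the nova-started instances?
sudo netstat -lunp | grep :67     # who owns the DHCP port?

sudo service dnsmasq stop         # stop the packaged daemon, if present
sudo update-rc.d dnsmasq disable  # keep it from starting at boot
```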
soren | Razique: Why? | 13:32 |
soren | Razique: Does the leases file not look correct? | 13:32 |
Razique | soren: it does 02:16:3e:02:ba:7d,server-255.novalocal,10.0.1.3 | 13:34 |
soren | Razique: And it clearly restarts dnsmasq. | 13:35 |
soren | Razique: So why do you think it's got to do with locking in nova? | 13:35 |
Razique | because every time the instance tries to retrieve its ip, we see the locking message | 13:36 |
soren | Err.. | 13:37 |
soren | Yes. | 13:37 |
soren | Of course. | 13:37 |
soren | Because every time, it needs to grab a lock to make stuff happen. | 13:37 |
soren | And if the IP gets into the leases file and dnsmasq gets restarted, it sounds like it goes pretty well. | 13:38 |
*** PeteDaGuru has joined #openstack | 13:39 | |
Razique | any hint about why, while everything works well (from a "processes" point of view), in the end my instance cannot reach the dhcp server ? | 13:39 |
Razique | mmm interesting http://paste.openstack.org/show/3717/ | 13:41 |
Razique | 0.0B | 13:41 |
Razique | ok even more weirder | 13:41 |
Razique | http://paste.openstack.org/show/3718/ | 13:42 |
Razique | the rule doesn't grab any packet | 13:42 |
*** dprince has joined #openstack | 13:42 | |
*** nati2 has joined #openstack | 13:42 | |
*** po has joined #openstack | 13:42 | |
soren | Razique: Well, I still haven't understood what you meant by: 13:24 < Razique> ok I flused and restarted the rule | 13:44 |
soren | Razique: So I don't know what we've already tried. | 13:44 |
Razique | soren: ok sorry, I wanted to say that I flushed all iptables rules, and restarted nova-network | 13:45 |
Razique | I've also terminated the instance and spawned a new one | 13:45 |
soren | Razique: how did you do that? | 13:46 |
*** bcwaldon has joined #openstack | 13:46 | |
Razique | which step are u reffering to soren ? | 13:46 |
Razique | the iptables flush ? | 13:46 |
*** Tristani3D has quit IRC | 13:46 | |
Razique | I did iptables -F -t nat && iptables -F | 13:47 |
soren | Ok. | 13:48 |
soren | Did you touch ebtables at all? | 13:48 |
Razique | soren: nope I didn't | 13:48 |
Razique | meanwhile I've also noticed an apparmor message http://paste.openstack.org/show/3719/ | 13:48 |
soren | Ok. | 13:48 |
soren | I don't think that's a problem. | 13:49 |
Razique | maybe a link | 13:49 |
Razique | oh ok | 13:49 |
Razique | :) | 13:49 |
soren | I think this is a networking thing. Can you try tcpdump'ing on the network host and see what sort of dhcp traffic comes in? | 13:49 |
*** jeremy has joined #openstack | 13:50 | |
Razique | what would be the interface to listen on ? | 13:51 |
soren | Just listen on all of them. | 13:51 |
soren | -i any | 13:51 |
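A hedged filter for that capture, limited to the DHCP/BOOTP ports so the output stays readable; run it on the network host first, then on the compute node for comparison.

```bash
sudo tcpdump -i any -n port 67 or port 68
```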
*** rsampaio has joined #openstack | 13:51 | |
Razique | while the instance asks for an ip | 13:53 |
Razique | from nova-network, ther is no udp traffix | 13:53 |
Razique | traffic | 13:53 |
soren | Ok. | 13:53 |
soren | Now try on the compute node. | 13:54 |
soren | Just for good measure. | 13:54 |
*** ahasenack has joined #openstack | 13:54 | |
Razique | it's more talkative http://paste.openstack.org/show/3721/ | 13:57 |
Razique | looks like the instance doesn't get the answer back | 13:57 |
*** lborda has joined #openstack | 13:58 | |
*** rustam has joined #openstack | 13:59 | |
*** hadrian has joined #openstack | 14:00 | |
*** judd7 has joined #openstack | 14:02 | |
*** voxfiles has joined #openstack | 14:02 | |
*** mattray has joined #openstack | 14:02 | |
Razique | i'm desperate :( | 14:03 |
cloudgeek | Razique: we help you | 14:03 |
Razique | yah I appreciate | 14:03 |
Razique | i've been stuck on that for hours | 14:03 |
Razique | while it always worked | 14:04 |
cloudgeek | Razique: it can happen with anyone ! so we are with you | 14:04 |
foexle | Razique: still the same problem ? | 14:05 |
foexle | Razique: or other one | 14:05 |
Razique | yah totally | 14:05 |
foexle | hmm hmmm | 14:05 |
Razique | unable to retrieve an ip from dhcp server | 14:05 |
foexle | do you see any errors or warnings in your logs ? | 14:06 |
*** cereal_bars has joined #openstack | 14:06 | |
Razique | nope | 14:06 |
Razique | that the thing | 14:06 |
Razique | :( | 14:06 |
foexle | can you paste `ps aux | grep dns` on compute | 14:06 |
Razique | nothing | 14:07 |
Razique | root 21294 0.0 0.0 6160 692 pts/0 S+ 15:07 0:00 grep --color=auto dns | 14:07 |
foexle | aha | 14:07 |
foexle | so your nova-network doesn't want to start dnsmasq | 14:07 |
uvirtbot | New bug: #902162 in openstack-qa "500 error with depilated key pairs" [Undecided,New] https://launchpad.net/bugs/902162 | 14:08 |
*** donald has joined #openstack | 14:08 | |
foexle | Razique: can you paste your nova.conf ? | 14:08 |
*** donald is now known as Guest80782 | 14:08 | |
Razique | foexle: http://paste.openstack.org/show/3723/ | 14:12 |
soren | Razique: I don't think it's that the instance doesn't get its response. I think the request never reaches the dhcp server. | 14:16 |
soren | Razique: What's the IP of the DHCP server? | 14:16 |
Razique | 172.16.40.245 | 14:16 |
*** mjfork has joined #openstack | 14:16 | |
Razique | and node 172.16.40.242 | 14:16 |
Razique | dhcp net : 10.0.10.0/24 | 14:16 |
soren | Razique: Can you give me the output of "sudo virsh nwfilter-list" on the compute node? | 14:17 |
soren | Razique: No, wait. | 14:17 |
soren | Even better: | 14:17 |
soren | Razique: First, "sudo virsh list" | 14:18 |
Razique | soren: v | 14:18 |
Razique | http://paste.openstack.org/show/3724/ | 14:18 |
Razique | I'm on VNC on the instance in fact | 14:18 |
soren | Lovely. | 14:18 |
Razique | so we have dhcprequest | 14:19 |
Razique | that don't end | 14:19 |
soren | Now: "sudo virsh nwfilter-dumpxml nova-instance-instance-00000106" | 14:19 |
soren | (yes, "instance-" twice) | 14:19 |
*** troytoman-away is now known as troytoman | 14:19 | |
Razique | http://paste.openstack.org/show/3725/ | 14:19 |
Razique | hehe | 14:19 |
soren | Ok, try: sudo virsh dumpxml instance-00000106 | 14:20 |
soren | That should probably suffice anyway. | 14:20 |
foexle | hmm razique | 14:20 |
Razique | soren: http://paste.openstack.org/show/3726/ | 14:20 |
Razique | foexle: yup | 14:20 |
foexle | your dnsmasq handles the c-net 10.0.1.x | 14:20 |
Razique | i'm about to explode ahah | 14:20 |
foexle | is that correct ? | 14:20 |
Razique | c-net ? | 14:20 |
foexle | net class c ;) | 14:21 |
Razique | ah yes | 14:21 |
Razique | the leased ips are within that range yes | 14:21 |
foexle | ok | 14:21 |
soren | Razique: brctl show br100 | 14:22 |
Razique | soren: http://paste.openstack.org/show/3727/ | 14:22 |
foexle | and you dont need to set up your interfaces in nova.conf .... it will be overwritten by the settings in db | 14:22 |
Razique | the bridge_interface one ? | 14:22 |
foexle | yeah and vlan | 14:22 |
Razique | yah | 14:23 |
Razique | that's the new stuff into diablo | 14:23 |
Razique | that's why I had to use a workaround atm | 14:23 |
Razique | i've updated the linux_net.py | 14:23 |
soren | Razique: If you tcpdump -i vlan100, do you see the dhcp traffic? | 14:23 |
Razique | so vconfig doesn't use the database value | 14:24 |
soren | Razique: Did this used to work? Or did it never work? | 14:24 |
Razique | soren: worked perfectly until this morning | 14:24 |
Razique | after the reboot | 14:24 |
soren | Wai, wait, wait... | 14:24 |
soren | You've updated linux_net.py? | 14:24 |
Razique | soren: tcpdump: WARNING: vlan100: no IPv4 address assigned | 14:25 |
soren | You've updated linux_net.py? | 14:25 |
Razique | soren: I replaced a line | 14:25 |
Razique | let me getit | 14:25 |
*** lloydde has joined #openstack | 14:25 | |
Razique | soren: http://paste.openstack.org/show/3728/ | 14:26 |
Razique | line 8 | 14:26 |
Razique | instead of retrieving the value from the database | 14:26 |
soren | ?!??! | 14:26 |
soren | You really could have mentioned that earlier. | 14:27 |
soren | Why don't you want it to use the value from the db? | 14:27 |
soren | Let me rephrase: | 14:27 |
soren | If it worked, why did you fix it? | 14:27 |
soren | Let me rephrase: | 14:27 |
soren | If it worked, why did you "fix" it? | 14:27 |
Razique | soren: that is what I said to foexle if you only use one nova-network (not multi_host) and don't use the same interface for every node | 14:28 |
Razique | without that "fix", the vlan didn't even wanted to up | 14:28 |
Razique | I didn't update it today though | 14:28 |
*** crescendo has quit IRC | 14:28 | |
*** crescendo has joined #openstack | 14:30 | |
*** po has quit IRC | 14:31 | |
soren | I see. | 14:31 |
soren | What did you change today? | 14:31 |
*** rsampaio has quit IRC | 14:32 | |
*** crescendo has quit IRC | 14:34 | |
Razique | the nodes rebooted | 14:35 |
Razique | but nothing was changed | 14:35 |
*** crescendo has joined #openstack | 14:35 | |
Razique | in fact this is the first time the whole site reboots since a live upgrade cactus -> diablo | 14:36 |
mjfork | Razique: late to the party - what is problem? something about DHCP to 2nd node? | 14:36 |
Razique | mjfork: the instance doesn't retrieve its ip from the dhcp server | 14:37 |
Razique | i don't use multi_host | 14:37 |
*** kbringard has joined #openstack | 14:37 | |
mjfork | did you lose a sysctl setting, perhaps rp_filter? | 14:38 |
*** Arminder has joined #openstack | 14:39 | |
*** lloydde has quit IRC | 14:40 | |
Razique | that setting doesn't ring me a bell | 14:40 |
Razique | it's commented into sysctl.conf | 14:40 |
*** vizsla_p has joined #openstack | 14:41 | |
Razique | that doesn't make any sense | 14:41 |
Razique | grrr | 14:41 |
foexle | Razique: cat /proc/sys/net/ipv4/ip_forward | 14:41 |
*** kbringard has quit IRC | 14:42 | |
Razique | 1 | 14:42 |
foexle | hmpf | 14:42 |
Razique | on both servers | 14:42 |
*** kbringard has joined #openstack | 14:42 | |
foexle | strange | 14:42 |
*** robbiew has joined #openstack | 14:43 | |
mjfork | my rp_filter is 2 | 14:44 |
Razique | foexle: what about you ? | 14:45 |
uvirtbot | New bug: #902175 in nova "nova-manage network delete fails with QuantumManager" [Undecided,New] https://launchpad.net/bugs/902175 | 14:45 |
foexle | which interface ? | 14:47 |
Razique | mjfork: it's a sysctl setting right ? | 14:48 |
foexle | on each interface "1" | 14:48 |
Razique | mjfork: on nova-network and the nodes ? | 14:49 |
Razique | or only the nova-network server ? | 14:49 |
mjfork | i set it on both | 14:49 |
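The rp_filter checks mentioned here, as hedged sysctl commands; which interfaces matter depends on the topology, and br100 below is only an example.

```bash
sysctl net.ipv4.conf.all.rp_filter         # inspect the current setting
sysctl net.ipv4.conf.br100.rp_filter       # example interface

sudo sysctl -w net.ipv4.conf.all.rp_filter=2   # 0 = off, 2 = loose mode
# persist by adding net.ipv4.conf.all.rp_filter = 2 to /etc/sysctl.conf
```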
*** po has joined #openstack | 14:50 | |
Razique | mmm doesn't make any difference | 14:51 |
Razique | still dhcpdiscover without success | 14:51 |
*** hggdh has quit IRC | 14:51 | |
*** hggdh has joined #openstack | 14:52 | |
*** po has quit IRC | 14:52 | |
Razique | foexle: any routing rule on the node ? | 14:52 |
*** pradeep has quit IRC | 14:52 | |
Razique | I mean for the dhcp | 14:52 |
*** hggdh has quit IRC | 14:52 | |
foexle | nope .... but my network manager runs on each node | 14:53 |
Razique | ah ok | 14:53 |
*** hggdh has joined #openstack | 14:53 | |
mjfork | so you have compute nodes that can't get DHCP from controller | 14:53 |
mjfork | controller is the only one running nova-network, it is not running anywhere else | 14:54 |
*** po has joined #openstack | 14:57 | |
*** mchenetz has joined #openstack | 14:58 | |
Razique | exactly | 14:58 |
Razique | meanwhile new thing | 14:58 |
Razique | nope | 14:59 |
*** hadrian has quit IRC | 15:00 | |
*** Ruetobas has quit IRC | 15:00 | |
*** Ruetobas has joined #openstack | 15:01 | |
mjfork | and you said some compute nodes are on different interfaces on the controller? | 15:01 |
*** TiMMay333 has joined #openstack | 15:01 | |
Razique | yah | 15:02 |
mjfork | are all nodes broken or some | 15:02 |
Razique | the controller uses eth1 for vlan | 15:02 |
Razique | the nodes eth2, et0 etc... | 15:02 |
Razique | yup all | 15:02 |
TiMMay333 | Hello all! | 15:03 |
*** lloydde has joined #openstack | 15:03 | |
TiMMay333 | So, im looking into joining the cloud provider fray, but we are a heavy VMware house.. has anyone had experience with connecting openstack to vSphere? is it worth it? | 15:03 |
mjfork | Razique: using tcpdump, do you see your DHCP request? | 15:05 |
*** bcwaldon has quit IRC | 15:05 | |
*** guigui1 has quit IRC | 15:07 | |
mjfork | TiMMay333: have you seen http://nova.openstack.org/vmwareapi_readme.html | 15:09 |
*** guigui has joined #openstack | 15:09 | |
Razique | mjfork: only in one wy | 15:12 |
Razique | way | 15:12 |
*** lorin1 has joined #openstack | 15:13 | |
mjfork | only seeing DHCP on 1 interface | 15:14 |
*** hadrian has joined #openstack | 15:15 | |
stevegjacobs | Hi | 15:15 |
TiMMay333 | mjfork: ya i glanced at it, from what i understand is that it talks directly to ESXi, and only uses local storage? So i can create an infrastructure that leverages openstack as my cloud operations dashboard, and leverage vSphere for High availability, vmotion, DRS scheduling, etc? | 15:19 |
TiMMay333 | i can't * | 15:19 |
mjfork | TiMMay333: that is my understanding, you cannot, if no one else chimes in I would suggest the general mailing list to confirm | 15:20 |
TiMMay333 | its a shame, because I could really see myself using this instead of vCloud Director 1.5... | 15:21 |
*** bcwaldon has joined #openstack | 15:23 | |
*** kbringard has quit IRC | 15:23 | |
*** lloydde has quit IRC | 15:24 | |
Razique | mjfork: soren foexle I found out | 15:24 |
*** kbringard has joined #openstack | 15:24 | |
*** rnirmal has joined #openstack | 15:25 | |
Razique | faulty physical switch | 15:25 |
Razique | it 's a non administrable one, simple switch | 15:25 |
Razique | that was blocking vlan | 15:25 |
mjfork | TiMMay333: what features of Openstack are you looking for if you still want HA/vMotion/DRS/etc | 15:25 |
*** po has quit IRC | 15:25 | |
stevegjacobs | I've been using Openstack Dashboard to launch instances,but now they seem to be stuck on 'Build' | 15:26 |
stevegjacobs | what could cause that | 15:26 |
stevegjacobs | ? | 15:26 |
*** dspano has joined #openstack | 15:26 | |
*** NashTrash1 has joined #openstack | 15:26 | |
kbringard | probably an error in the scheduler or the compute node | 15:27 |
NashTrash1 | Good morning Openstack'ers | 15:27 |
kbringard | I'd check /var/log/nova/nova-scheduler.log on your controller | 15:27 |
*** imsplitbit has joined #openstack | 15:27 | |
NashTrash1 | I am on Swift 1.4.2 and want to upgrade to 1.4.4. Are there docs covering upgrade steps? | 15:27 |
kbringard | and that'll likely either have an error, or it'll tell you which compute node the VM was cast to | 15:27 |
stevegjacobs | ok - thanks kbringard! | 15:27 |
kbringard | then you hop on the compute node it was cast to, and check /var/log/nova/nova-compute.log | 15:27 |
kbringard | that's where I'd personally start | 15:27 |
*** Arminder has quit IRC | 15:28 | |
*** Arminder has joined #openstack | 15:28 | |
foexle | Razique: normally an unmanaged switch doesn't block any vlans | 15:28 |
foexle | because unmanaged switches do not check any vlan tags | 15:28 |
*** troytoman is now known as troytoman-away | 15:29 | |
TiMMay333 | mjfork: Im currently researching ways to offer cloud services using our current virtual infrastructure, obviously vCloud director is one of those choices, but i find it really hooks into the virtualization/networking structure, also i hate vShield Edge. also down the road to offer different hypervisors for different tiered solutions. using vSphere would remove another learning curve when it comes to the hypervisor, | 15:29 |
soren | Razique: Awesome. | 15:30 |
*** shaon has joined #openstack | 15:31 | |
*** dubsquared has joined #openstack | 15:32 | |
foexle | soren: you are the network pro in openstack right ? :) | 15:35 |
*** ldlework has joined #openstack | 15:35 | |
mjfork | TiMMay333: what do you define as "cloud services" just the end user self service? | 15:36 |
*** Rajaram has quit IRC | 15:37 | |
TiMMay333 | mfork: Very good question! end user self service would be one of those services.. | 15:39 |
mjfork | TiMMay333: any other services in mind? | 15:41 |
TiMMay333 | elastic computing services as well, and probably a managed service too... do you have something in mind? | 15:42 |
mjfork | no, just trying to understand what you were interested in OpenStack for - my guess is you don't want to pay the cost of VMware? | 15:43 |
*** GheRivero_ has joined #openstack | 15:43 | |
TiMMay333 | mjfork: the cost is something, but not the primary reason, have you ever tried vCloud Director 1.5 ? | 15:44 |
mjfork | TiMMay333: nope :-) | 15:44 |
*** onlinegangster has joined #openstack | 15:45 | |
*** new2stack has joined #openstack | 15:45 | |
TiMMay333 | mjfork: ya, lol, because vcloud really hooks into the environment, and already ive been getting database issues with it.. | 15:46 |
mjfork | TiMMay333: have you considered running your cloud stack on KVM? | 15:46 |
TiMMay333 | mjfork: and doesnt really open the possibility of using other hypervisors (to offer a cheaper solution for clients who dont want to pay the vmware tax) | 15:46 |
TiMMay333 | i havent gotten into KVM yet.. | 15:47 |
mjfork | start with KVM + OpenStack to test it | 15:47 |
mjfork | i think you will be surprised | 15:47 |
mjfork | I would expect it would be cheaper in price than your vmware hosting and be targeted at different workloads | 15:47 |
*** code_franco has joined #openstack | 15:48 | |
TiMMay333 | mjfork: ya? ill take a look at it.. but i mean vmware from my experience is generally good at High Availabilities of VMs and the performance has been great | 15:48 |
mjfork | TiMMay333: i agree, but thats where the use cases for OpenStack differ from VMware | 15:50 |
mjfork | vmware is - keep the VMs up all the time | 15:50 |
mjfork | openstack is - hardware fails, vms fail, deal with it | 15:50 |
*** cp16net has joined #openstack | 15:50 | |
*** stevegjacobs has left #openstack | 15:51 | |
*** dragondm_ has joined #openstack | 15:51 | |
new2stack | I've been trying to follow the quick start guide but I'm stuck connecting server 2. I've triple checked I set things up as documented and tested that I can access the various ports on server 1 from server 2. But it's not showing up when i run availability-zones | 15:51 |
*** hugokuo has joined #openstack | 15:51 | |
TiMMay333 | mjfork: that is probably the most honest answer ive gotten, thank you very much for that! | 15:51 |
*** dragondm_ has quit IRC | 15:51 | |
*** dragondm has joined #openstack | 15:52 | |
TiMMay333 | mjfork: but i will try KVM + openstack as another solution. | 15:52 |
mjfork | you should certainly evaluate it as a cloud platofrm | 15:53 |
mjfork | if you are targeting the right workloads, it is ideal | 15:53 |
new2stack | I've been trying to follow the quick start guide but I'm stuck connecting server 2. I've triple checked I set things up as documented and tested that I can access the various ports on server 1 from server 2. But it's not showing up when i run availability-zones. Any pointers? I'm not really sure how to go about testing it since I'm not sure how server 2 gets authenticated to server 1 | 15:54 |
TiMMay333 | mjfork: what would be the "right workloads" ? | 15:54 |
mjfork | typically, workloads that can withstand a node or vm failure | 15:54 |
mjfork | things like dev/test and scale out workloads (think analytics/hadoop and web servers) | 15:54 |
TiMMay333 | mjfork: Right, instead of having the virtualization layer do the availability, make the workloads deal with it.. correct? | 15:55 |
mjfork | yes, those are the ideal candidates | 15:55 |
*** rnorwood has joined #openstack | 15:55 | |
mjfork | going forward that may change, but today those are the core ones | 15:56 |
TiMMay333 | mjfork: that makes complete sense! | 15:56 |
TiMMay333 | mjfork: thank you soo much for that valuable insight | 15:56 |
mjfork | np | 15:56 |
*** jaypipes has joined #openstack | 15:57 | |
*** rods has quit IRC | 15:57 | |
new2stack | Can someone explain how the servers communicate to each other so I can get server 2 talking to server 1 please | 16:00 |
*** adjohn has joined #openstack | 16:01 | |
*** magg has joined #openstack | 16:05 | |
magg | hello | 16:05 |
magg | how do i test swift authentication with keystone | 16:05 |
magg | help plz | 16:05 |
*** mindpixel has quit IRC | 16:06 | |
new2stack | can someone please help get this compute node connected | 16:07 |
mjfork | new2stack: http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-multiple-compute-nodes.html | 16:07 |
*** martines has quit IRC | 16:07 | |
*** martines has joined #openstack | 16:08 | |
*** nati2 has quit IRC | 16:08 | |
foexle | new2stack: whats your problem exactly ? | 16:09 |
*** freeflyi1g has joined #openstack | 16:10 | |
*** stevegjacobs has joined #openstack | 16:10 | |
new2stack | Thanks mjfork. The only difference from my nova.conf is the ec2_api & ec2_url. What do those do? and I assume those need to be set on server 1 as well | 16:11 |
*** rocambol1 has quit IRC | 16:11 | |
mjfork | tells the system where to find some services needed | 16:11 |
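For context, a hedged excerpt of a Diablo-era compute-node flagfile showing where --ec2_url and the related service flags point; the controller address and password are placeholders, and exact flag names vary between releases.

```bash
# /etc/nova/nova.conf on the compute node (values are placeholders)
--ec2_url=http://192.168.0.1:8773/services/Cloud
--sql_connection=postgresql://nova:password@192.168.0.1/nova
--rabbit_host=192.168.0.1
--glance_api_servers=192.168.0.1:9292
```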
mjfork | but, back to what foexle said - what is not working? | 16:12 |
new2stack | I'm sorry I can't be more precise, but server 2 (compute node) isn't registering with server 1 (controller). I know it can access the database. | 16:12 |
*** TheOsprey has quit IRC | 16:12 | |
*** freeflying has quit IRC | 16:13 | |
gnu111 | Hi all...I am trying to open a port in my instance. I did this "iptables -I INPUT 1 -p tcp --dport 8080 -j ACCEPT" but now when I telnet from outside using the "public_ip address 8080", I don't get anything. | 16:13 |
*** afm has quit IRC | 16:14 | |
stevegjacobs | results of nova-compute.log : http://paste.openstack.org/show/3730/ | 16:15 |
new2stack | I know this is a stupid question, but whats the difference between --flat_network_bridge=br100 and --vlan_interface=br100? | 16:15 |
stevegjacobs | This follows launching an instance from openstack-dashboard | 16:16 |
mjfork | new2stack: did you start nova-compute on host2? what does nova-manage server list show | 16:16 |
Razique | mmmm better | 16:17 |
Razique | the instances get dhcp | 16:17 |
Razique | but can't reach the metadata server | 16:17 |
Razique | and I can't reach the instances | 16:17 |
mjfork | stevegjacobs: are you out of IPs to assign? | 16:17 |
mjfork | gnu111: can you open it with the euca-authorize commands? | 16:18 |
stevegjacobs | hmm - shouldn't be but I'll check! | 16:18 |
gnu111 | mjfork: ah! let me try it. | 16:18 |
foexle | Razique: i've found the problem with metadata but i dont have a solution only a workaround | 16:19 |
*** afm has joined #openstack | 16:21 | |
gnu111 | mjfork: I tried this. euca-authorize -P tcp -p 8080 -s 0.0.0.0/0 default. But still I get connection refused when I telnet to 8080. | 16:21 |
mjfork | Razique: what is ec2 url on compute nodea | 16:21 |
*** reidrac has left #openstack | 16:21 | |
*** maploin has joined #openstack | 16:21 | |
mjfork | but you can SSH to it? | 16:22 |
new2stack | Here is my nova.conf files and the output of euca_describe-availability-zones verbose http://paste.openstack.org/show/3731/ | 16:23 |
foexle | gnu111: what says euca-describe-groups ? | 16:23 |
*** gavri1 has left #openstack | 16:23 | |
mjfork | new2stack: show nova-manage service list | 16:23 |
gnu111 | foexle: PERMISSIONmyprojectdefaultALLOWStcp80808080FROMCIDR0.0.0.0/0 | 16:24 |
mjfork | gnu111: is your server in that project? | 16:25 |
new2stack | Nova error: manage doesn't have a service list option | 16:25 |
mjfork | nova-manage not nova manage | 16:25 |
foexle | hmmmm did you reboot the instance ? because i don't know if the iptables rules will be set on the fly | 16:25 |
kbringard | they set on the fly | 16:25 |
foexle | gnu111: wich openstack version? | 16:26 |
foexle | kbringard: ok :D | 16:26 |
uvirtbot | New bug: #902218 in nova "overLimit fault does not inform client of actual limit exceeded" [Undecided,In progress] https://launchpad.net/bugs/902218 | 16:26 |
gnu111 | mjfork: I ran the instance as an admin for the project. I am using Cactus. | 16:26 |
kbringard | unless you're running an old and broken or bugged version that I'm unaware of | 16:26 |
foexle | kbringard: an hi :) | 16:26 |
kbringard | which is certainly possible :-) | 16:26 |
kbringard | herro | 16:26 |
gnu111 | foexle: I am using cactus. | 16:26 |
kbringard | cactus should apply the new rules dynamically | 16:26 |
kbringard | what does the IPtables chain look like for that instance on the compute node? | 16:27 |
foexle | gnu111: oh ... hmm so i don't know ... i'm using diablo sry | 16:27 |
foexle | gnu111: make a tcpdump on your interface on compute node and look if a connection comes in with this port | 16:28 |
new2stack | oops, that would do it haha. Says it's happy, but can't see the compute node (csn1) http://paste.openstack.org/show/3732/ | 16:28 |
mjfork | new2stack: you only have 1 host running, csm1 | 16:28 |
gnu111 | foexle: ok. I will try that. | 16:28 |
*** hadrian has quit IRC | 16:28 | |
mjfork | new2stack: your 2nd host never registered | 16:28 |
*** guigui has quit IRC | 16:28 | |
gnu111 | in the compute node, under Chain nova-compute-inst-3082: ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 | 16:29 |
mjfork | gnu111: did you verify your guest is actually listening on that port? | 16:29 |
new2stack | I know, I'm trying to get it to. I'm not sure things went in right. CSN1 I have to use init.d to control nova-compute & on CSM1 I have to manually start all services if I restart it | 16:30 |
mjfork | new2stack: so do that on host2? | 16:30 |
gnu111 | mjfork: netstat -tanp does not show 8080. | 16:31 |
mjfork | inside teh guest? | 16:31 |
new2stack | do what on host 2? All the document said to do was install nova-compute & setup the nova.conf file | 16:31 |
foexle | gnu111: from other host: nmap <instance-pub-ip> -p 8080 .... if you see filtered => rules are wrong, if you see closed => no response from this port | 16:31 |
gnu111 | mjfork: oh sorry. that was the host. | 16:31 |
mjfork | new2stack: start nova-compute | 16:31 |
mjfork | gnu111: do it inside the guest, if it isn't listening then you will never connect. | 16:32 |
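A hedged way to verify that from inside the guest; nothing listens on 8080 by default, so python's built-in server is just a convenient throwaway listener, and the public IP below is a placeholder.

```bash
netstat -tlnp | grep 8080            # inside the guest: anything bound to 8080?
python -m SimpleHTTPServer 8080 &    # throwaway listener for the test

# then, from outside, against the instance's public IP (placeholder)
telnet 203.0.113.10 8080
```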
new2stack | it is running, I just can't use service restart nova-compute. I have to use /etc/init.d/nova-compute restart | 16:32 |
gnu111 | mjfork: yes you are right. netstat -tanp does not show 8080 in the guest. | 16:32 |
*** robbiew has quit IRC | 16:33 | |
mjfork | new2stack: ok, then you are fine. check nova-compute.log, it is never registering with the server | 16:33 |
*** nerens has quit IRC | 16:33 | |
*** rsampaio has joined #openstack | 16:33 | |
foexle | gnu111: in the guest => netstat -tulpen | grep 8080 | 16:34 |
foexle | if you have no entry your service are not running on this port | 16:34 |
magg | how do i test swift authentication with keystone? | 16:35 |
*** pradeep has joined #openstack | 16:35 | |
foexle | magg: thats a good question :D | 16:35 |
magg | lol | 16:35 |
magg | no one knows? | 16:35 |
*** adjohn has quit IRC | 16:35 | |
gnu111 | foexle: i don't have any services right now. I was trying to test how opening and closing ports work in the instance. is that the issue? | 16:36 |
new2stack | do I need to have psycopg2 on the compute-node as well? It says ImportError: no module named psycopg2 | 16:36 |
foexle | gnu111: yeah .... you can't connect to a port that nothing is listening on | 16:36 |
*** mattray has quit IRC | 16:37 | |
*** dnjaramba has quit IRC | 16:37 | |
*** lts has joined #openstack | 16:37 | |
gnu111 | foexle: thanks. let me try to start a webserver and see if I can connect to port 80. | 16:38 |
mjfork | new2stack: yes, looks like it | 16:38 |
foexle | gnu111: or you start your webserver on port 8080 ;) | 16:38 |
NashTrash1 | Does anyone know of docs covering a Swift upgrade? We are going from 1.4.2 to 1.4.4. Thanks. | 16:38 |
new2stack | q | 16:38 |
new2stack | oops, wrong window | 16:38 |
*** krow has joined #openstack | 16:38 | |
gnu111 | foexle: port 80 responds after starting the httpd service. I guess I understand how it works now? | 16:39 |
foexle | you authorize this port with euca-tools | 16:41 |
foexle | then you can check it with telnet, nmap or whatever | 16:41 |
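[Editor's note: a hedged sketch of the flow foexle describes, assuming euca2ools and the default security group; the group name, port and address range are illustrative.]
    # open port 80 in the instance's security group
    euca-authorize -P tcp -p 80 -s 0.0.0.0/0 default
    # then probe it from another host
    nmap -p 80 <instance-public-ip>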
*** otaku2 has joined #openstack | 16:42 | |
new2stack | awesome! It wanted the postgresql-client as well but now it's happy. I know it had to be something small I overlooked. Thanks a lot! :) | 16:42 |
gnu111 | foexle: Yes. that is right. it is working now. | 16:42 |
foexle | :) | 16:42 |
foexle | great | 16:42 |
otaku2 | what's the deal with the diablo rpm repo? there's no openstack-repo rpm for it | 16:42 |
*** hub_cap has joined #openstack | 16:42 | |
foexle | otaku2: http://docs.openstack.org/diablo/openstack-compute/admin/content/installing-openstack-compute-on-rhel6.html | 16:43 |
new2stack | otaku2 - http://yum.griddynamics.net/ | 16:43 |
foexle | i think you can use this repo | 16:43 |
otaku2 | yes, that is the url to the repo | 16:43 |
cloudgeek | Error: Unable to get instance list: The server has either erred or is incapable of performing the requested operation. | 16:44 |
otaku2 | but the document mentions an RPM that will install the repo | 16:44 |
otaku2 | it is 404 | 16:44 |
otaku2 | foexle: if you read that page, try wget http://yum.griddynamics.net/yum/diablo/openstack-repo-2011.3-0.3.noarch.rpm it is 404 | 16:44 |
mjfork | install the Repo manually | 16:44 |
new2stack | I'm working on getting these to work. You need master/deps, master/openstack & diablo-centos... if you use EPEL some of the dependencies are newer and it complains | 16:44 |
foexle | otaku2: haha i see oh yeah :) | 16:45 |
otaku2 | I can do that - but why advertise a repo rpm if it doesn't exist :P | 16:45 |
foexle | go to http://yum.griddynamics.net/yum/diablo/ and look at the package names | 16:45 |
otaku2 | is the recommended hypervisor still kvm? | 16:46 |
foexle | no, it's only the default | 16:46 |
new2stack | http://yum.griddynamics.net/yum/master/openstack/openstack-repo-2011.3-0.2.noarch.rpm interesting that it's 3-0.2 and you're looking for 3-0.3 | 16:46 |
otaku2 | yeah, the 3-0.2 one points at the master repo, not the diablo-specific one | 16:46 |
mjfork | otaku2: it is the default as foexle said, but it also has the most functional support | 16:46 |
*** dnjaramba has joined #openstack | 16:47 | |
mjfork | otaku2: go to http://www.cyberciti.biz/tips/rhel5-fedora-core-add-new-yum-repository.html and follow the "How do I install manually" section | 16:47 |
mjfork | i had to do the same thing, there is no repo RPM for diablo I could find | 16:47 |
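[Editor's note: a hedged sketch of creating the repo file by hand, since the openstack-repo RPM for diablo is missing; the file name, repo id and gpgcheck setting are assumptions, the baseurl is the one mentioned above.]
    # /etc/yum.repos.d/openstack-diablo.repo
    [openstack-diablo]
    name=OpenStack Diablo (Grid Dynamics)
    baseurl=http://yum.griddynamics.net/yum/diablo/
    enabled=1
    gpgcheck=0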
otaku2 | the documentation cake is a lie | 16:48 |
*** cloudgeek has quit IRC | 16:48 | |
mjfork | otaku2: what do you mean? | 16:48 |
otaku2 | bad joke. basically the documentation references a resource that nobody took the time to create. | 16:48 |
*** cloudgeek has joined #openstack | 16:49 | |
mjfork | otaku2: yeah, agreed. if someone contacted GD they might fix it | 16:49 |
*** krow1 has joined #openstack | 16:49 | |
*** nerens has joined #openstack | 16:49 | |
*** hadrian has joined #openstack | 16:49 | |
*** krow has quit IRC | 16:50 | |
otaku2 | and you seriously never heard of the cake is a lie? lol | 16:52 |
uvirtbot | New bug: #902234 in nova "Nova compute servers unable to reach AMQP servers" [Undecided,New] https://launchpad.net/bugs/902234 | 16:53 |
otaku2 | google it. its relevant to this discussion :) | 16:53 |
*** reed has joined #openstack | 16:55 | |
*** TheOsprey has joined #openstack | 16:57 | |
new2stack | whats the difference between nova-volume and swift/object-store? | 16:57 |
*** mattray has joined #openstack | 16:57 | |
*** wilmoore has joined #openstack | 16:58 | |
magg | Kiall u there? | 16:58 |
*** krow1 has quit IRC | 16:59 | |
Kiall | magg: yea | 16:59 |
mjfork | new2stack: nova-volume is block store, swift is object store | 16:59 |
magg | kiall: i have a question about keystone_data.sh | 17:00 |
magg | kiall: what is %tenant_id%? | 17:00 |
stevegjacobs | Hey Kiall, how you doin? | 17:00 |
Kiall | magg: keystone replaces that string with the users tenant id when a service asks for the list of endpoints (other APIs) the user has access to | 17:01 |
Kiall | stevegjacobs: heya, good... having another OS issue? | 17:02 |
magg | kiall: so the NOVA_PROJECT_ID is the tenant right? and also that variable | 17:03 |
*** rnirmal has quit IRC | 17:03 | |
*** vladimir3p has joined #openstack | 17:03 | |
Kiall | yea, lots of variables ;) I'm guessing the part you're not understanding is what exactly keystone endpoints are used for.. | 17:04 |
magg | i guess | 17:05 |
*** hggdh has quit IRC | 17:05 | |
Kiall | eg nova's API lives at "http://server:8774/v1.1/%tenant_id%" | 17:05 |
magg | i see | 17:05 |
Kiall | so a person in tenant ID 54321 who wants to boot an instance would make an API request to "http://server:8774/v1.1/54321/boot" | 17:06 |
Kiall | while someone in tenant id 12345 would call "http://server:8774/v1.1/12345/boot" | 17:06 |
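[Editor's note: a small illustration of the substituted endpoint in practice; the token variable, host name and tenant id are placeholders, and listing servers is shown instead of boot for brevity.]
    # list servers as tenant 54321 against the endpoint keystone handed back
    curl -H "X-Auth-Token: $TOKEN" http://server:8774/v1.1/54321/servers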
stevegjacobs | Kiall: Yeah - I created a snapshot of a server but when I try to launch it, it stays stuck in 'Build' | 17:06 |
*** llang629 has joined #openstack | 17:07 | |
Kiall | stevegjacobs: any idea what step it fails at? | 17:07 |
magg | aah i see now... hehe thanks | 17:07 |
*** maplebed has joined #openstack | 17:07 | |
Kiall | magg: no problem... | 17:07 |
otaku2 | so does openstack have something that manages san storage so it can be shared between compute nodes? | 17:07 |
*** katkee has quit IRC | 17:07 | |
foexle | otaku2: you can use nova-volume = iscsi targets, or the objectstore (swift), which is like S3 | 17:07 |
stevegjacobs | not sure - I didn't see any failure notice in log | 17:07 |
stevegjacobs | I think I'll terminate and relaunch it | 17:08 |
*** hggdh has joined #openstack | 17:08 | |
otaku2 | we dont have iscsi here, we use fc luns | 17:08 |
otaku2 | but can present the same luns to multiple machines | 17:08 |
Kiall | otaku2: nova-volume supports using "drivers" to connect to different kinds of SANs | 17:08 |
Kiall | (This is for amazon EBS style volumes you attach to instances, rather than attaching to physical nodes) | 17:09 |
Kiall | Out of the box it has a driver for HP's P4000 SAN and 1 other.. | 17:09 |
otaku2 | I guess what I dont yet understand is what swift does vs nova-volume | 17:10 |
otaku2 | but I'm still rtfm | 17:10 |
Kiall | otaku2: are you familiar with amazon EBS and amazon S3? | 17:10 |
otaku2 | nope | 17:10 |
otaku2 | veritas virtual cluster server and xenserver | 17:10 |
Kiall | Okay - swift/S3 is an object store, i.e. you can't mount it, and you can only talk to it over HTTP | 17:10 |
otaku2 | but when you say object store, that's not VM disk | 17:11 |
stevegjacobs | Kiall - I just tried terminating (using dashboard) and now it shows as active, but no 'fixed' ip? | 17:11 |
otaku2 | what would you need an object store for besides maybe vm images | 17:11 |
foexle | Kiall: hahaha you have precasted answers ? :D | 17:11 |
Kiall | And nova-volume/EBS is a block store you attach to instances.. So you can create a 10GB or so disk and it will be available inside an instance as "/dev/sdX" | 17:12 |
*** pixelbeat has quit IRC | 17:12 | |
Kiall | neither nova-volume nor swift is used to back running VM image files.. You can use SAN storage by simply mounting it in the right place for that | 17:13 |
*** marcuz has quit IRC | 17:13 | |
Kiall | otaku2: lets say you had millions of images to store (like say, flickr), swift would be ideal for that... | 17:13 |
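[Editor's note: a hedged example of the EBS-style flow Kiall describes above, using euca2ools; the size, zone name and IDs are illustrative.]
    euca-create-volume -s 10 -z nova                              # create a 10 GB volume
    euca-attach-volume vol-00000001 -i i-00000006 -d /dev/vdb     # shows up inside the instance as /dev/vdb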
otaku2 | thats the issue - how would you mount san disks directly on a vm without presenting them to the compute node they are running on? That doesn't make sense. | 17:14 |
otaku2 | I used to work for a cdn, I have no use for that lol | 17:14 |
afm | finally got the dashboard installed…. tenant not found for user admin… when logging in.. keystone-manage tenant list reveals admin ? | 17:14 |
mjfork | otaku2: if you want to host the disk image containing the OS on a SAN, you need to look at the boot-from-volume extension | 17:14 |
foexle | you can see swift like an asset host | 17:14 |
*** adjohn has joined #openstack | 17:15 | |
*** lloydde has joined #openstack | 17:15 | |
otaku2 | so if people don't use san storage for vm disk, what do they use? local storage? That's a great way to lose your vms | 17:15 |
Kiall | otaku2: Okay, Assuming you have a iSCSI SAN (Nova likes iSCSI) nova-compute attaches to the iSCSI export, resulting in a /dev/sdX on the physical node.. it then passes that into the VM | 17:15 |
magg | bah i still don't know how to test the swift auth with keystone... | 17:15 |
*** adjohn has quit IRC | 17:15 | |
mjfork | otaku2: you are right - if you need persistent data look at nova-volume | 17:16 |
*** pradeep has quit IRC | 17:16 | |
Kiall | otaku2: OpenStack is not designed to be HA.. But lots of people do mount /var/lib/nova/instances on a SAN... | 17:16 |
*** lorin1 has quit IRC | 17:16 | |
mjfork | otaku2: the default is very much: hardware fails, vm fails, deal with it and work around it at the application/workload layer. | 17:16 |
Kiall | ie Nova doesnt need any inbuilt support to be able to keep VM images on a SAN.. | 17:17 |
mjfork | otaku2: if you need HA VMs, you have to do work | 17:17 |
otaku2 | ergh. that's not what I thought this was then. | 17:17 |
mjfork | otaku2: what are your use cases? | 17:17 |
Kiall | stevegjacobs: weird.. did it terminate eventually? | 17:17 |
foexle | mjfork: not really my instances are all stateless | 17:17 |
otaku2 | VM's that can be moved with instant failover from compute node live | 17:17 |
mjfork | here is a question - would you run the workloads you are targeting for OS on EC2? | 17:17 |
otaku2 | vmware and xenserver do this | 17:18 |
otaku2 | no, too costly. | 17:18 |
otaku2 | thats why we buy our own gear :) | 17:18 |
mjfork | KVM does HA instances too - it's a function of the software stack and what it is trying to accomplish. you CAN do it with openstack, but it is not the default. | 17:18 |
*** cloudgeek has quit IRC | 17:18 | |
otaku2 | and likely not well documented? | 17:18 |
mjfork | read this thread from forums http://forums.openstack.org/viewtopic.php?f=16&t=536&p=1770&hilit=boot+from+volume#p1770 | 17:19 |
*** otaku2 has quit IRC | 17:19 | |
*** otaku2 has joined #openstack | 17:19 | |
Kiall | otaku2: nova can do live-migration as well... | 17:20 |
mjfork | otaku2: but are you ok with EC2's environment - i.e. if something fails you lose it - for your workloads | 17:20 |
*** pradeep has joined #openstack | 17:21 | |
otaku2 | we may have to make that part of the agreement for any users on it | 17:21 |
mjfork | i think that answer is no based on your earlier comments | 17:21 |
otaku2 | right now we are just evaluating it anyway | 17:22 |
mjfork | if you are installing OpenStack and expecting VMware with DRS & HA you will be disappointed | 17:22 |
otaku2 | but we have some boxes with very little local storage but access to a huge san. we were hoping to run vm's over the san. | 17:22 |
otaku2 | and be able to fail them over to another compute node | 17:22 |
stevegjacobs | kiall - I waited, then used the dashboard to terminate again and finally it did | 17:22 |
mjfork | why do you need to fail them over vs reprovision on a new node? | 17:22 |
*** mgoldmann has quit IRC | 17:23 | |
otaku2 | no data loss? | 17:23 |
stevegjacobs | I am still not able to launch this snapshot | 17:23 |
mjfork | is the data in the SAN? | 17:23 |
otaku2 | so are these really just throwaway nodes? no wonder rackspace hasn't moved anyone over to this yet lol | 17:23 |
Kiall | stevegjacobs: and no errors in the logs? | 17:23 |
mjfork | the default is throwaway | 17:23 |
mjfork | you need to make it HA | 17:23 |
otaku2 | yes thats what I've been saying all along - all data on the san period | 17:23 |
mjfork | or modify your workloads to fit in the environment | 17:23 |
mjfork | data != OS | 17:24 |
stevegjacobs | I am starting again to see if I get errors this time and what they are | 17:24 |
mjfork | i can make an OS image, have it boot using local disk, and use data from a SAN | 17:24 |
otaku2 | hmm. a small paradigm shift. can you have multiple running vm's use the same single local disk image? | 17:25 |
otaku2 | cos disk space is an issue on commodity gear. which is what the san is for. | 17:25 |
mjfork | i don't know your workload | 17:25 |
Kiall | otaku2: you can run the VMs on the SAN if you like.. No configuration needed, just mount the SAN in the right place | 17:26 |
*** yshh has quit IRC | 17:27 | |
*** ahasenack has quit IRC | 17:27 | |
*** fridim_ has joined #openstack | 17:27 | |
*** vizsla_p has quit IRC | 17:27 | |
*** dotdevops has joined #openstack | 17:27 | |
Kiall | And, if you do that, you get live migration automatically (but not failover in case of H/W issues) | 17:27 |
*** can1 has joined #openstack | 17:27 | |
otaku2 | kiall, but you can't mount the same disk from more than one node without a clusterable file system | 17:27 |
*** Oneiroi has quit IRC | 17:28 | |
otaku2 | that's what I was trying to determine - whether openstack has some sort of integration to move a disk so it is only mounted in one place | 17:28 |
Kiall | Yea - You have to provide a FS that can be mounted from multiple servers.. | 17:28 |
otaku2 | fun. :) | 17:30 |
*** koolhead11 has quit IRC | 17:30 | |
mjfork | otaku2: what I think kiall is saying is really: use an NFS mount to host your VMs | 17:31 |
stevegjacobs | Kiall: stuck at 'build' again - nova-compute log in the compute node from launching: http://paste.openstack.org/show/3733/ | 17:31 |
mjfork | use that as your instances directory | 17:31 |
*** jog0 has joined #openstack | 17:31 | |
mjfork | then you have your VMs persisted | 17:31 |
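[Editor's note: a minimal sketch of the shared instances directory mjfork suggests; the NFS server name and export path are hypothetical.]
    # /etc/fstab on each compute node
    nas01:/export/nova-instances  /var/lib/nova/instances  nfs  defaults  0  0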
mjfork | stevegjacobs: what is in nova-scheduler? i was thinking build meant it never got handed out to a node | 17:32 |
*** andrewbogott has joined #openstack | 17:33 | |
*** mrjazzcat has joined #openstack | 17:33 | |
*** yamahata_ has quit IRC | 17:33 | |
*** maploin has quit IRC | 17:33 | |
*** jog0 has left #openstack | 17:34 | |
otaku2 | now that I'm thinking about it, using nfs to a nas is way less overhead than implementing a clustered file system on the san. It's just that san is 4x faster than nas lol | 17:34 |
stevegjacobs | mjfork: nothing in the nova-scheduler log, but lots referring to this launch in the nova-compute log on the node: http://paste.openstack.org/show/3733/ | 17:35 |
stevegjacobs | I need to leave the office now and I will have to leave this till later. | 17:36 |
*** robbiew has joined #openstack | 17:36 | |
*** stevegjacobs has quit IRC | 17:37 | |
*** dendro-afk is now known as dendrobates | 17:39 | |
*** ppradhan has joined #openstack | 17:42 | |
*** whit_ has joined #openstack | 17:42 | |
ppradhan | Hi all, I need some input to diagnose an issue. When I run euca-get-console-output i-00000006 I am getting empty console | 17:43 |
ppradhan | instance looks to be running... | 17:43 |
ppradhan | where should I check? | 17:43 |
Kiall | ppradhan: can you ping the instances IP? | 17:44 |
ppradhan | Kiall: no | 17:45 |
Kiall | just in case, let me rephrase that.. can you ping the instances IP from the nova-network node? | 17:45 |
Kiall | private IP rather than floating IP too ;) | 17:46 |
magg | im getting this error with swift: http://pastebin.com/1iANgWQq | 17:46 |
ppradhan | Kiall: well I don't know which IP it is getting.. any way to look? | 17:46 |
Kiall | euca-describe-instances would tell you | 17:47 |
ppradhan | INSTANCE  i-00000006  ami-00000003  running  mykey (proj, None)  0  m1.tiny  2011-12-09T17:36:02Z  unknown zone  ami-00000000  ami-00000000 | 17:47 |
ppradhan | I have that only | 17:47 |
*** bryguy has quit IRC | 17:48 | |
Kiall | Okay, it looks like its not being assigned an IP, so the instance is failing to boot | 17:48 |
Kiall | the IP should be between "ami-00000003" and "running" | 17:48 |
Kiall | check the nova-network and nova-compute logs | 17:49 |
ppradhan | Kiall: oh.. | 17:49 |
*** lonetech007 has joined #openstack | 17:50 | |
ppradhan | Kiall: even if I don't need vlan tagging right now, do I have to use the --vlan option? right now I am using FlatManager | 17:51 |
Kiall | No - if you're not using VLANs (i.e. with FlatManager or FlatDHCPManager) you don't need it | 17:52 |
ppradhan | Kiall: I have some doubt with my network configuration. | 17:53 |
Kiall | Not sure I can help there.. I've never used either the FlatManager or the FlatDHCPManager, I've no idea what setup is needed! | 17:53 |
*** bryguy has joined #openstack | 17:55 | |
*** jj0hns0n has joined #openstack | 17:55 | |
can1 | ppradhan: you can use the FlatManager only with Ubuntu based images that run the cloud-init stuff | 17:55 |
ppradhan | Kiall: http://pastebin.com/BCqq06vZ this is how it looks | 17:55 |
can1 | do a "brctl show " | 17:56 |
can1 | on you nova-compute node | 17:56 |
ppradhan | can1: oh.. so I will have to use the vlan even if I don't want to do vlan tagging? | 17:56 |
ppradhan | can1: here it is: br100  8000.5254002c89ef  no  eth0 | 17:57 |
can1 | or you can use the vncdisplay to attach to your instance and see what it's doing | 17:57 |
*** koolhead17 has joined #openstack | 17:57 | |
mjfork | can1: i don't believe this to be true - "you can use the FlatManager only with Ubuntu based images that run the cloud-init stuff" - why do you say that? | 17:57 |
Kiall | "oh.. so I will have to use the vlan even if I don't want to do vlan taggin?" .. No | 17:58 |
mjfork | My understanding is FlatManager injects IPs and is independent of cloud-init | 17:58 |
kbringard | it doesn't have to | 17:58 |
Kiall | FlatManager and FlatDHCPManager dont use VLAN's | 17:58 |
can1 | because your baseline image has to know how to get the IP address from the network node, which doesn't run DHCP | 17:59 |
can1 | When you choose Flat networking, Nova does not manage networking at all. Instead, IP addresses are injected into the instance via the file system (or passed in via a guest agent). Metadata forwarding must be configured manually on the gateway if it is required within your network. | 17:59 |
kbringard | if you have a DHCP server on your network (and ideally tie it into nova-dhcpbridge) and then setup the 169.254.169.254 rule on your upstream router you don't have to inject the IP | 17:59 |
can1 | http://docs.openstack.org/diablo/openstack-compute/admin/content/configuring-flat-networking.html | 17:59 |
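[Editor's note: a hedged sketch of the flat-networking flags being discussed, in diablo-era nova.conf syntax; the bridge name, and the flat_injected flag in particular, are assumptions to be checked against the linked docs.]
    --network_manager=nova.network.manager.FlatManager
    --flat_network_bridge=br100
    --flat_injected=true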
*** jdurgin has joined #openstack | 18:00 | |
*** rustam has quit IRC | 18:01 | |
ppradhan | here is my nova.conf if you need to look http://pastebin.com/Q5LdWGtr | 18:01 |
*** Aaron|away has joined #openstack | 18:01 | |
*** mrjazzcat has left #openstack | 18:01 | |
mjfork | ppradhan: use VNC to connect to the console and run ifconfig | 18:03 |
can1 | ppradhan: read this guide, it will help: http://cssoss.files.wordpress.com/2011/11/openstackbookv2-0_csscorp.pdf | 18:03 |
magg | guys Keystone does not return me x-storage-url | 18:04 |
magg | how can i fix that | 18:04 |
*** lts has quit IRC | 18:08 | |
can1 | ppradhan: is Glance running on your second server, 172.16.0.2 ? | 18:09 |
ppradhan | can1: no, it's on the 1st server. I've followed the same pdf file as the one you pasted | 18:10 |
ppradhan | can1: on server 2 only nova-compute is running | 18:11 |
mjfork | ppradhan: did you use VNC to connect to the guest? | 18:12 |
ppradhan | mjfork : i am trying to.. but i don't know the port | 18:13 |
mjfork | on the host, run virsh list | 18:13 |
mjfork | what is the instance? | 18:13 |
*** dendrobates is now known as dendro-afk | 18:14 | |
*** dendro-afk is now known as dendrobates | 18:14 | |
ppradhan | mjfork: virsh is empty.. | 18:14 |
can1 | ppradhan: your nova.conf says that Glance can be reached at 172.16.0.2, or that IP is on your second server | 18:14 |
mjfork | go to the other host | 18:14 |
ppradhan | mjfork: it should be running on the 2nd server which has nova-compute.. I don't have it there either.. the vm has not started i believe | 18:15 |
Kiall | Random off-topic question, but probably the right crowd! Sorry :) Anyone have anything good/bad to say about Areca RAID cards? | 18:16 |
ppradhan | can1: yes, that is a bit confusing. | 18:16 |
*** dysinger has joined #openstack | 18:16 | |
mjfork | explains empty console. | 18:16 |
mjfork | but euca-describe-instances says running? | 18:17 |
ppradhan | yes, it says running but no IP associated with it | 18:17 |
mjfork | ppradhan: run nova-manage vm list | 18:19 |
mjfork | and look @ the node value | 18:19 |
ppradhan | thats empty | 18:19 |
ppradhan | can1: what is the right way to define the glance server? is it --glance_api_servers or --glance_host? | 18:21 |
mjfork | ppradhan: can you paste nova-compute / nova-scheduler | 18:22 |
mjfork | the logs | 18:22 |
*** Ryan_Lane has joined #openstack | 18:22 | |
*** new2stack has quit IRC | 18:22 | |
can1 | ppradhan: --glance_api_servers=172.16.31.141:9292 | 18:24 |
ppradhan | mjfork: http://pastebin.com/safteXv3 (nova-scheduler), http://pastebin.com/FCjh6E3F (nova-compute) | 18:25 |
*** GheRivero_ has quit IRC | 18:26 | |
*** pradeep has quit IRC | 18:26 | |
mjfork | ppradhan: do this - provision a new server and recapture the logs, then post them | 18:27 |
mjfork | the provisioning was long enough ago that the log doesn't contain the provisioning, i see | 18:28 |
ppradhan | you mean a new vm instance? | 18:28 |
mjfork | yes | 18:29 |
ppradhan | ok | 18:29 |
sniperd | Im using the multi node install tutorial setup instructions and have run into the following error "Exception: Could not create account AUTH_system for user system:root" anyone seen this before? | 18:33 |
sniperd | thats when doing the curl to get an auth token | 18:33 |
*** mattray has quit IRC | 18:34 | |
*** NashTrash1 has quit IRC | 18:35 | |
mjfork | sniperd: swift? | 18:36 |
sniperd | mjfork: sorry -- yes | 18:36 |
*** katkee has joined #openstack | 18:36 | |
mjfork | are you using tempauth? | 18:37 |
sniperd | yes | 18:37 |
mjfork | what do you filter:tempauth entries look like in proxy-server.conf | 18:37 |
*** magg has quit IRC | 18:38 | |
*** holoway has quit IRC | 18:40 | |
sniperd | mjfork: https://gist.github.com/549b0dadb41f592801c5 | 18:40 |
*** egant has joined #openstack | 18:40 | |
ppradhan | mjfork: http://pastebin.com/WyaWKkhd here are the new logs. I have nulled the old logs | 18:40 |
*** holoway has joined #openstack | 18:41 | |
mjfork | sniperd: do you have the account autocreate enabled in swift? | 18:41 |
ppradhan | mjfork: this is in the nova-compute node http://pastebin.com/HpLNVKM3 | 18:41 |
mjfork | ppradhan: sorry, should have specified enabling verbose mode in the conf file and capturing, my fault. | 18:42 |
mjfork | wait, this is better. | 18:42 |
ppradhan | mjfork: ok | 18:42 |
mjfork | so, yes. you clearly cannot connect to glance like can1 pointed out | 18:42 |
ppradhan | mjfork: the vm instance says it's pending.. now if I reboot the instance it will say running | 18:43 |
mjfork | there is no instance, as virsh list shows | 18:43 |
mjfork | and nova-manage vm list | 18:43 |
sniperd | mjfork: nope, I dont. That's not in the example configs, can you point me to what I should add? | 18:43 |
mjfork | it never gets that far | 18:43 |
ppradhan | I have changed to --glance_api_servers=172.16.0.1:9292 | 18:44 |
ppradhan | in both servers | 18:44 |
ppradhan | glance is running on 172.16.0.1 | 18:44 |
mjfork | sniperd: i thought maybe that was the problem - check the logs for error messages or stack traces | 18:44 |
mjfork | ppradhan: did you restart the nova services after changing it? and can you telnet from the hosts to 172.16.0.1:9292? | 18:44 |
ppradhan | mjfork: yes I have restarted and I can telnet to port 9292 | 18:45 |
*** perestrelka has quit IRC | 18:45 | |
*** perestrelka has joined #openstack | 18:45 | |
mjfork | ppradhan: diablo? | 18:47 |
*** can1 has quit IRC | 18:47 | |
*** shaon has quit IRC | 18:47 | |
*** TiMMay333 has quit IRC | 18:47 | |
*** voxfiles has quit IRC | 18:47 | |
ppradhan | mjfork: yes, diablo running on oneiric | 18:49 |
*** whit_ has quit IRC | 18:50 | |
ppradhan | mjfork: what does the vm instance pending mean? | 18:50 |
mjfork | ppradhan: it looks all good to me. did you restart nova-compute and nova-api and others? | 18:50 |
ppradhan | mjfork: yes | 18:50 |
mjfork | not sure, it all looks good, it is clearly not able to connect to glance from the compute node | 18:51 |
mjfork | could reboot and try again | 18:51 |
ppradhan | mjfork: i don't see that glance connect error after I restart nova-compute now | 18:51 |
uvirtbot | New bug: #902282 in nova "Nose does not test nova/tests/test_linux_net.py " [Undecided,New] https://launchpad.net/bugs/902282 | 18:51 |
mjfork | sniperd: can you pastebin your request | 18:51 |
ppradhan | mjfork: i think it is now connected | 18:51 |
mjfork | yeah? did you try and reprovision | 18:52 |
ppradhan | mjfork: yes | 18:52 |
ppradhan | it says pending | 18:52 |
ppradhan | here is the log http://pastebin.com/UVkGrgWv | 18:53 |
sniperd | mjfork: having trouble getting it to log its errors, but yes I will | 18:53 |
mjfork | ppradhan: there is nothing in that log showing a provisioning | 18:54 |
uvirtbot | New bug: #902288 in nova "vm_utils _check_image_size inconsistant returns" [Undecided,New] https://launchpad.net/bugs/902288 | 18:55 |
*** fridim_ has quit IRC | 18:55 | |
sniperd | mjfork: this is from the actual curl: https://gist.github.com/6eefec0e0c601b97b2e8 | 18:56 |
*** mattray has joined #openstack | 18:58 | |
*** MarcMorata has joined #openstack | 19:00 | |
*** dendrobates is now known as dendro-afk | 19:00 | |
*** dpippenger has quit IRC | 19:00 | |
mjfork | what does the request look like .. i wonder if there is an error in it | 19:00 |
*** hub-cap has joined #openstack | 19:01 | |
*** magg has joined #openstack | 19:01 | |
magg | yo | 19:01 |
magg | can i get some help with iptables | 19:01 |
magg | http://docs.openstack.org/diablo/openstack-object-storage/admin/content/part-i-setting-up-secure-access.html | 19:02 |
*** mattray has quit IRC | 19:02 | |
*** hub-cap_ has joined #openstack | 19:02 | |
*** pixelbeat has joined #openstack | 19:02 | |
ppradhan | mjfork: pastebin err.. try this http://pastebin.com/FnBhQfUf | 19:04 |
otaku2 | magg, what linux distribution? | 19:04 |
*** hub_cap has quit IRC | 19:04 | |
otaku2 | i.e. are you using a built in iptables or writing your own script? | 19:04 |
magg | ubuntu | 19:05 |
magg | built in iptables | 19:05 |
sniperd | mjfork: Cant get swift-proxy to log an error to begin with, trying to fix that | 19:05 |
*** heckj has joined #openstack | 19:05 | |
*** hub-cap has quit IRC | 19:05 | |
mjfork | there may not be an error so much as wrong values in the request? not sure. | 19:06 |
magg | i have these two: | 19:06 |
magg | iptables -A INPUT -i eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT | 19:06 |
magg | iptables -A OUTPUT -o eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT | 19:06 |
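[Editor's note: a side note on persistence, not from the conversation; rules added this way are lost on reboot unless saved and restored, e.g. via a pre-up hook. The file path is illustrative.]
    iptables-save > /etc/iptables.rules
    iptables-restore < /etc/iptables.rules    # run at boot, e.g. from a pre-up script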
*** robbiew has quit IRC | 19:06 | |
*** mattray has joined #openstack | 19:07 | |
mjfork | ppradhan: that looks good - your provisioning made it farther. | 19:08 |
mjfork | does nova-manage vm list show anything | 19:08 |
ppradhan | it is not showing anything | 19:08 |
ppradhan | how does nova know which node it should start the instance on? | 19:09 |
mjfork | nova-scheduler | 19:09 |
mjfork | what does nova-manage service list show | 19:09 |
*** darraghb has quit IRC | 19:09 | |
ppradhan | i mean from nova.conf | 19:09 |
mjfork | it doesn't - nova-scheduler uses the list of known nova-compute services to delegate to | 19:10 |
ppradhan | ok.. | 19:10 |
ppradhan | i believe this is the right way to start an instance (?) euca-run-instances ami-00000003 -k mykey -t m1.tiny | 19:11 |
*** shaon has joined #openstack | 19:11 | |
mjfork | yes, assuming you have an ami with that # | 19:12 |
ppradhan | yes | 19:12 |
*** andrewbogott has quit IRC | 19:13 | |
*** andrewbogott has joined #openstack | 19:14 | |
ppradhan | mjfork: i did euca-reboot-instance # and now I can see in virsh list --all | 19:15 |
mjfork | how about nova-manage vm list | 19:15 |
*** andrewbogott has quit IRC | 19:15 | |
*** andrewbogott has joined #openstack | 19:16 | |
uvirtbot | New bug: #902297 in devstack "devstack overwrites screenrc" [Undecided,New] https://launchpad.net/bugs/902297 | 19:16 |
ppradhan | nothing in nova-manage vm list | 19:16 |
mjfork | ppradhan: thats very interesting | 19:17 |
ppradhan | mjfork: I am connected with vnc | 19:17 |
ppradhan | mjfork: It says cloud-init waiting for 120 seconds for a network device | 19:17 |
mjfork | ok | 19:18 |
mjfork | cool | 19:18 |
sniperd | mjfork: the proxy-server itself never returns a stacktrace that I can tell (running it in the foreground). All I have is the gist https://gist.github.com/6eefec0e0c601b97b2e8 | 19:18 |
*** dendro-afk is now known as dendrobates | 19:18 | |
mjfork | i would suggest the general mailing list | 19:19 |
Razique | ok back guys | 19:19 |
Razique | first soren mjfork foexle thanks a lot for your help | 19:19 |
sniperd | mjfork: ok thank you | 19:19 |
ppradhan | mjfork: the vm is on but with no n/w :) | 19:21 |
*** whit has joined #openstack | 19:22 | |
mjfork | ppradhan: yeah, so you have 2 nodes - is nova-network running on 2nd node? | 19:22 |
ppradhan | mjfork: no… http://pastebin.com/BCqq06vZ | 19:23 |
mjfork | what is your ec2_api setting in nova.conf on host2 | 19:25 |
ppradhan | i don't have that entry | 19:26 |
*** yshh has joined #openstack | 19:27 | |
*** mdomsch has joined #openstack | 19:28 | |
ppradhan | mjfork: they look like http://pastebin.com/dphEmYRT | 19:29 |
*** laclasse has joined #openstack | 19:30 | |
bsza | why is max_connections set to 2 in rsyncd.conf in the swift examples? Experimentally derived? | 19:30 |
*** yshh has quit IRC | 19:31 | |
*** shaon has quit IRC | 19:33 | |
*** dendrobates is now known as dendro-afk | 19:34 | |
mjfork | --ec2_url=http://127.0.0.1:8773/services/Cloud | 19:34 |
*** rnirmal has joined #openstack | 19:34 | |
Razique | ok i'm off | 19:35 |
Razique | looks like I've 3 broken instances | 19:35 |
ppradhan | mjfork: ok | 19:35 |
foexle | Razique: wahhhhhhhhhhhhhhhhhhhhhat ? :D | 19:35 |
Razique | foexle: the site is up | 19:35 |
Razique | :) | 19:35 |
Razique | but | 19:35 |
foexle | Razique: so nice evening :) | 19:35 |
Razique | three instances seem broken | 19:35 |
Razique | i'll respawn | 19:35 |
Razique | the data is on a volume | 19:35 |
Razique | so it's not that big a deal | 19:36 |
Razique | again, thanks a lot my friend | 19:36 |
*** jj0hns0n has quit IRC | 19:36 | |
Razique | support is greatly appreciated in moments like this :) | 19:36 |
Razique | it's like it'll never end, and we hate our job =D | 19:36 |
*** dpippenger has joined #openstack | 19:36 | |
foexle | Razique: :D oh yeah ^^ i know :D | 19:36 |
Razique | so basically, the switch I installed was faulty | 19:37 |
foexle | Razique: my cloud is ha now :> | 19:37 |
Razique | foexle: orly ? | 19:37 |
*** gohko_nao has quit IRC | 19:37 | |
Razique | we really need to talk about strategies | 19:37 |
foexle | :> | 19:37 |
Razique | what happened today will speed things up | 19:37 |
Razique | ok biye guys, fricking tired | 19:38 |
foexle | we should do this ;) .... but the openstack network ... is very tricky | 19:38 |
Razique | foexle: you tell me | 19:38 |
foexle | because it's not designed for my tasks :> | 19:38 |
*** dendro-afk is now known as dendrobates | 19:38 | |
foexle | bye bye ... i'll do ;) | 19:38 |
Razique | thanks mister :) you guys rox | 19:38 |
*** Razique has left #openstack | 19:38 | |
*** lonetech007 has quit IRC | 19:38 | |
*** lonetech007 has joined #openstack | 19:39 | |
*** gohko_nao has joined #openstack | 19:39 | |
*** jfluhmann has joined #openstack | 19:39 | |
*** dprince has quit IRC | 19:41 | |
*** foexle is now known as foexle-afk | 19:41 | |
*** hugokuo has quit IRC | 19:42 | |
ppradhan | mjfork: should we enable dhcp inside the instance or no? | 19:50 |
*** CaptTofu1 has quit IRC | 19:55 | |
mjfork | ppradhan: depends - what is your networking type in nova.conf | 19:56 |
*** daysmen has quit IRC | 19:56 | |
ppradhan | mjfork: it is flatmanager | 19:57 |
mjfork | then it is not DHCP, but injection based | 19:57 |
ppradhan | my understanding was the floating address will be assigned by nova-network | 19:58 |
mjfork | a floating address, yes | 19:58 |
mjfork | but that is not bound inside the instance | 19:58 |
ppradhan | I could see floating not private | 19:58 |
ppradhan | not bound? | 19:59 |
mjfork | a floating ip is bound on the nova-network node | 19:59 |
ppradhan | this is one thing I am confused about: what is a floating ip for? | 19:59 |
mjfork | and forwarded to the guest | 19:59 |
ppradhan | so the guest would get the private IP which we have defined by nova-manage private .....? | 20:00 |
mjfork | yes | 20:00 |
ppradhan | so it means only one IP is seen from inside the instance.. ? | 20:01 |
ppradhan | and that is the private ip | 20:01 |
ppradhan | is that so | 20:01 |
ppradhan | ? | 20:01 |
*** adjohn has joined #openstack | 20:01 | |
mjfork | yes | 20:01 |
ppradhan | is there a way to check which floating ip is assigned to my currently running instance? | 20:01 |
mjfork | did you assign one explicitly? | 20:02 |
ppradhan | no | 20:02 |
mjfork | then it doesn't have one | 20:02 |
ppradhan | ok | 20:02 |
ppradhan | its optional? | 20:02 |
mjfork | yes | 20:03 |
ppradhan | ok | 20:03 |
ppradhan | I can only see "lo" if i do ifconfig -a | 20:04 |
mjfork | inside the guest? | 20:04 |
*** _adjohn has joined #openstack | 20:04 | |
mjfork | what kind of guest is it? RHEL or Ubuntu? | 20:04 |
ppradhan | yes | 20:04 |
ppradhan | ubuntu | 20:04 |
*** hub_cap has joined #openstack | 20:05 | |
mjfork | on the host, virsh dumpxml <instance> | 20:06 |
mjfork | pastebin it | 20:06 |
*** adjohn has quit IRC | 20:07 | |
*** _adjohn is now known as adjohn | 20:07 | |
*** bhall has quit IRC | 20:07 | |
ppradhan | mjfork: its here http://pastebin.com/VCZ3fne4 | 20:07 |
*** hub-cap_ has quit IRC | 20:08 | |
mjfork | hmm, i see NO network device | 20:09 |
ppradhan | yeah | 20:10 |
mjfork | thats the problem | 20:10 |
mjfork | not sure why it is like that | 20:10 |
ppradhan | this has to do with nova-network? | 20:10 |
mjfork | no, nova-compute | 20:10 |
ppradhan | ok | 20:10 |
mjfork | you haven't messed with templates or anything | 20:10 |
ppradhan | no | 20:10 |
uvirtbot | New bug: #902316 in keystone "PPA build failing on keystone version" [Critical,In progress] https://launchpad.net/bugs/902316 | 20:11 |
mjfork | hmm, i am out of ideas. | 20:12 |
mjfork | nothing in the nova provisioning looks suspect | 20:12 |
ppradhan | oh.. | 20:12 |
mjfork | is that an oh... as in remembered something.. or as in that's no good that i am out of ideas? | 20:13 |
ppradhan | :) no.. i am confused | 20:13 |
mjfork | are you out of networks/ips? | 20:14 |
ppradhan | no.. the nova-manage network is assigned with only a private pool | 20:14 |
*** dendrobates is now known as dendro-afk | 20:14 | |
mjfork | nova-manage network list | 20:15 |
ppradhan | this is odd.. Command failed, please check log for more info | 20:15 |
ppradhan | it had an entry | 20:16 |
*** lorin1 has joined #openstack | 20:17 | |
mjfork | what is in the nova-manage log | 20:20 |
*** adjohn has quit IRC | 20:20 | |
ppradhan | (nova): TRACE: NetworkNotCreated: --bridge_interface is required to create a network. | 20:20 |
*** TheOsprey has quit IRC | 20:21 | |
ppradhan | I have a bridge created in nova-compute host | 20:21 |
mjfork | does nova.conf have --bridge_interface | 20:22 |
*** cole has joined #openstack | 20:22 | |
*** tcampbell has joined #openstack | 20:24 | |
ppradhan | no | 20:24 |
mjfork | add one on the nova-network node | 20:24 |
ppradhan | ok | 20:24 |
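[Editor's note: a minimal sketch of mjfork's suggestion; the flag name comes from the error above, but the interface name is an assumption and must match the NIC the bridge should sit on.]
    # appended to /etc/nova/nova.conf on the nova-network node, then restart nova-network
    --bridge_interface=eth0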
*** dendro-afk is now known as dendrobates | 20:27 | |
*** cereal_bars has quit IRC | 20:29 | |
*** nickon has joined #openstack | 20:33 | |
*** perestrelka has quit IRC | 20:34 | |
*** localhost has quit IRC | 20:36 | |
*** ninkotech has joined #openstack | 20:37 | |
*** mgoldmann has joined #openstack | 20:37 | |
*** localhost has joined #openstack | 20:38 | |
*** perestrelka has joined #openstack | 20:38 | |
*** cole has quit IRC | 20:40 | |
*** lorin1 has left #openstack | 20:47 | |
*** lorin1 has joined #openstack | 20:47 | |
*** adjohn has joined #openstack | 20:48 | |
*** hazmat has quit IRC | 20:49 | |
*** hazmat has joined #openstack | 20:49 | |
*** bhall has joined #openstack | 20:50 | |
*** dendrobates is now known as dendro-afk | 20:52 | |
*** mdomsch has quit IRC | 20:57 | |
*** jsh has joined #openstack | 20:58 | |
*** mgoldmann has quit IRC | 21:05 | |
*** mgoldmann has joined #openstack | 21:05 | |
*** cschauer has joined #openstack | 21:08 | |
*** nickon has quit IRC | 21:11 | |
*** btorch has quit IRC | 21:12 | |
*** jblesage has joined #openstack | 21:20 | |
*** mchenetz has quit IRC | 21:22 | |
*** jblesage has quit IRC | 21:24 | |
uvirtbot | New bug: #902346 in nova "Need to add support for X-Forwarded-For header " [Undecided,New] https://launchpad.net/bugs/902346 | 21:26 |
*** lonetech007 has quit IRC | 21:27 | |
*** hub_cap has quit IRC | 21:27 | |
*** NashTrash1 has joined #openstack | 21:27 | |
*** swill has joined #openstack | 21:32 | |
*** swill has left #openstack | 21:32 | |
*** swill has joined #openstack | 21:33 | |
swill | is there any auth middleware that currently works with the swift3 middleware? | 21:33 |
swill | that i can use as a reference. | 21:33 |
swill | i am pretty sure keystone does not have the s3 token validator implemented and it looks like swauth is missing it too. | 21:34 |
swill | and tempauth seems to have exactly what swauth has. | 21:35 |
swill | but i could be wrong. | 21:35 |
*** joesavak has joined #openstack | 21:35 | |
uvirtbot | New bug: #885087 in devstack "easyinstall scripts in /usr/local/bin do not work " [Undecided,New] https://launchpad.net/bugs/885087 | 21:36 |
*** bryguy has quit IRC | 21:36 | |
*** bryguy has joined #openstack | 21:37 | |
magg | does nova save instance information or files i need to remove after terminating an instance | 21:38 |
magg | if so, where can i find them? | 21:39 |
*** mgoldmann has quit IRC | 21:40 | |
*** PotHix has quit IRC | 21:44 | |
uvirtbot | New bug: #902352 in tempest "Testcase: Write Testcases for Nova extension - security group /rules" [Undecided,New] https://launchpad.net/bugs/902352 | 21:45 |
*** otaku2 has quit IRC | 21:46 | |
*** Aaron|away has quit IRC | 21:46 | |
*** pothos_ has joined #openstack | 21:53 | |
*** pothos has quit IRC | 21:54 | |
*** pothos_ is now known as pothos | 21:54 | |
*** Oneiroi has joined #openstack | 21:55 | |
*** Oneiroi has joined #openstack | 21:55 | |
*** jakedahn has joined #openstack | 21:56 | |
uvirtbot | New bug: #902357 in keystone "Keystone doesn't allow a token to be scoped to multiple tenants" [Undecided,New] https://launchpad.net/bugs/902357 | 21:56 |
uvirtbot | New bug: #902358 in tempest "Testcase: Write testcases for Nova Extension Floating IPs" [Undecided,New] https://launchpad.net/bugs/902358 | 21:56 |
magg | dudes | 21:56 |
magg | !!! | 21:56 |
openstack | magg: Error: "!!" is not a valid command. | 21:56 |
magg | http://docs.openstack.org/diablo/openstack-object-storage/admin/content/part-i-setting-up-secure-access.html | 21:56 |
*** katkee has quit IRC | 21:56 | |
*** fulanito has joined #openstack | 21:56 | |
magg | im trying to enable https for swift | 21:56 |
magg | but this URL: https://yourswiftinstall.com:11000/v1.0 | 21:57 |
fulanito | hi | 21:57 |
*** lorin1 has quit IRC | 21:57 | |
magg | why does it use port 11000 | 21:57 |
*** dubsquared has quit IRC | 21:57 | |
*** Oneiroi has quit IRC | 21:57 | |
magg | and does keystone work with https | 22:00 |
uvirtbot | New bug: #902360 in tempest "Testcase: Write Testcases for Quota Operations" [Undecided,New] https://launchpad.net/bugs/902360 | 22:01 |
uvirtbot | New bug: #902361 in horizon "Unable to delete snapshots" [Undecided,New] https://launchpad.net/bugs/902361 | 22:01 |
*** heckj has quit IRC | 22:03 | |
magg | do i need to modify the endpointTemplate? | 22:04 |
*** aliguori_ has joined #openstack | 22:05 | |
*** aliguori has quit IRC | 22:06 | |
*** tcampbell has quit IRC | 22:08 | |
*** katkee has joined #openstack | 22:10 | |
*** Aaron|away has joined #openstack | 22:16 | |
*** stevegjacobs has joined #openstack | 22:17 | |
*** whenry has joined #openstack | 22:20 | |
*** katkee has quit IRC | 22:20 | |
*** laurent\ has joined #openstack | 22:20 | |
*** bcwaldon has quit IRC | 22:21 | |
*** magg has quit IRC | 22:24 | |
uvirtbot | New bug: #902366 in tempest "Testcase: Write Testcases for Rebuild Server" [Undecided,New] https://launchpad.net/bugs/902366 | 22:26 |
*** imsplitbit has quit IRC | 22:27 | |
*** PeteDaGuru has left #openstack | 22:28 | |
*** newnode has joined #openstack | 22:28 | |
newnode | i'm getting a connection time out error for the mysql connection when i start the compute service on a 2nd server...i checked with tcpdump and the requests seem to be getting to the CC but no reply gets back to the 2nd server | 22:30 |
*** po has joined #openstack | 22:30 | |
newnode | here is the error log - http://paste.openstack.org/show/3741/ | 22:31 |
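[Editor's note: a hedged checklist for this symptom (compute node can reach the controller but MySQL never answers), not a confirmed diagnosis; the password, database name and controller IP are placeholders.]
    # on the cloud controller: let MySQL listen beyond localhost (/etc/mysql/my.cnf), then restart mysql
    bind-address = 0.0.0.0
    # and grant the nova user remote access
    mysql -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'secret';"
    # on the compute node: point nova at the controller in /etc/nova/nova.conf
    --sql_connection=mysql://nova:secret@<controller-ip>/nova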
uvirtbot | New bug: #902371 in tempest "Testcase: write Testcases for Addresses" [Undecided,New] https://launchpad.net/bugs/902371 | 22:31 |
*** bsza has quit IRC | 22:33 | |
*** dspano has quit IRC | 22:38 | |
*** nerens has quit IRC | 22:39 | |
*** bryguy has quit IRC | 22:40 | |
uvirtbot | New bug: #902374 in tempest "Testcase: write Testcases for Volumes" [Undecided,New] https://launchpad.net/bugs/902374 | 22:41 |
*** koolhead17 has left #openstack | 22:42 | |
*** joesavak has quit IRC | 22:45 | |
*** fulanito has quit IRC | 22:48 | |
*** Aaron|away has quit IRC | 22:48 | |
*** rnorwood has quit IRC | 22:52 | |
*** bryguy has joined #openstack | 22:56 | |
*** kbringard has quit IRC | 22:59 | |
*** lborda has quit IRC | 23:06 | |
*** mwhooker has quit IRC | 23:15 | |
*** swill has quit IRC | 23:15 | |
*** vladimir3p has quit IRC | 23:16 | |
*** mwhooker has joined #openstack | 23:17 | |
*** vladimir3p has joined #openstack | 23:18 | |
*** vladimir3p has quit IRC | 23:18 | |
*** code_franco has quit IRC | 23:20 | |
*** mwhooker has quit IRC | 23:20 | |
*** rsampaio has quit IRC | 23:20 | |
*** markvoelker has quit IRC | 23:27 | |
*** yshh has joined #openstack | 23:31 | |
sniperd | Is there an issue with using LVM as the block storage for swift? | 23:37 |
sniperd | I cannot get it to work without setting mount-check to false | 23:37 |
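[Editor's note: a hedged sketch of using an LVM logical volume with swift; mount_check only requires that each device directory under /srv/node actually be a mount point, so mounting the LV there usually avoids disabling the check. The VG/LV and device names are assumptions and must match the names used in the ring.]
    mkfs.xfs /dev/vg0/swift1
    mkdir -p /srv/node/swift1
    mount -t xfs -o noatime /dev/vg0/swift1 /srv/node/swift1
    chown swift:swift /srv/node/swift1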
*** jakedahn has quit IRC | 23:38 | |
*** rnirmal has quit IRC | 23:39 | |
*** krow has joined #openstack | 23:41 | |
*** fridim_ has joined #openstack | 23:42 | |
*** cp16net has quit IRC | 23:43 | |
*** dragondm has quit IRC | 23:44 | |
*** mwhooker has joined #openstack | 23:45 | |
*** mattray has quit IRC | 23:46 | |
*** krow has quit IRC | 23:46 | |
*** jfluhmann has quit IRC | 23:46 | |
*** ppradhan has left #openstack | 23:47 | |
*** aliguori_ has quit IRC | 23:50 | |
uvirtbot | New bug: #902392 in tempest "Testcase: write Testcases for Keystone-services" [Undecided,New] https://launchpad.net/bugs/902392 | 23:51 |
uvirtbot | New bug: #902389 in tempest "Testcase: Write Testcases for Keystone - Roles" [Undecided,New] https://launchpad.net/bugs/902389 | 23:52 |
*** stewart has quit IRC | 23:53 | |
*** vladimir3p has joined #openstack | 23:54 | |
uvirtbot | New bug: #902393 in tempest "Testcase: Write Testcases for Keystone - Tenant" [Undecided,New] https://launchpad.net/bugs/902393 | 23:55 |
*** NashTrash1 has quit IRC | 23:57 | |
Ryan_Lane | I'm running diablo (via the ppa) and I'm deleting instances, and compute is deleting them, but for some reason describe instances shows them as running | 23:57 |
Ryan_Lane | and they won't delete.... | 23:57 |
*** stewart has joined #openstack | 23:58 | |
*** judd7 has quit IRC | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!