Thursday, 2012-02-09

*** al has quit IRC00:01
*** jj0hns0n has quit IRC00:01
*** al has joined #openstack00:01
*** sandywalsh has quit IRC00:02
*** pixelbeat has quit IRC00:03
*** Turicas has joined #openstack00:04
*** robbiew has quit IRC00:04
*** jakkudanieru has joined #openstack00:06
*** dolphm has joined #openstack00:07
*** Brlink has joined #openstack00:07
*** al has quit IRC00:08
*** al has joined #openstack00:08
*** miclorb_ has quit IRC00:09
*** al has quit IRC00:09
*** miclorb_ has joined #openstack00:09
*** al has joined #openstack00:10
*** al has joined #openstack00:11
*** shraddha has quit IRC00:13
*** jj0hns0n has joined #openstack00:14
*** ayoung has joined #openstack00:15
*** koolhead17 is now known as koolhead17|zzZZ00:15
*** sandywalsh has joined #openstack00:18
*** dwcramer has quit IRC00:18
*** littleidea has joined #openstack00:18
*** gyee has quit IRC00:18
*** dendro-afk is now known as dendrobates00:19
*** PaulMachTech has joined #openstack00:22
*** wonk has joined #openstack00:23
*** judd7 has quit IRC00:25
*** al has quit IRC00:25
*** al has joined #openstack00:25
*** albert23 has left #openstack00:26
*** al has quit IRC00:28
*** al has joined #openstack00:28
*** rnorwood has quit IRC00:28
*** Kyle__ has quit IRC00:28
*** andrewbogott is now known as andrewbogott[gon00:30
*** jakkudanieru has quit IRC00:32
*** dwcramer has joined #openstack00:33
*** miclorb_ has quit IRC00:34
*** jakkudanieru has joined #openstack00:35
*** taihen has quit IRC00:38
*** lloydde has quit IRC00:44
*** davepigott has quit IRC00:44
*** zzed has quit IRC00:46
*** shraddha_ has quit IRC00:46
*** sandywalsh has quit IRC00:48
*** MarkAtwood has quit IRC00:48
*** supriya has joined #openstack00:49
*** jj0hns0n has quit IRC00:50
*** taihen has joined #openstack00:50
*** gray-- has quit IRC00:51
*** jj0hns0n has joined #openstack00:54
*** PaulMachTech has quit IRC00:56
*** miclorb_ has joined #openstack00:56
*** vincentricci has joined #openstack00:58
*** adjohn has quit IRC00:58
*** livemoon has joined #openstack00:59
*** adjohn has joined #openstack00:59
*** deshantm has joined #openstack00:59
*** jj0hns0n has quit IRC00:59
*** jakedahn has quit IRC01:00
*** sandywalsh has joined #openstack01:00
*** stuntmachine has quit IRC01:00
*** kbringard1 has quit IRC01:03
*** ayoung has quit IRC01:03
*** dendrobates is now known as dendro-afk01:03
*** jj0hns0n has joined #openstack01:05
*** jakkudanieru has quit IRC01:06
*** andrewc has quit IRC01:08
*** dolphm has quit IRC01:10
*** supriya has quit IRC01:10
*** dolphm has joined #openstack01:11
*** mrjazzcat has quit IRC01:13
*** dolphm has quit IRC01:13
*** AndrewWeiss has quit IRC01:15
*** jj0hns0n has quit IRC01:16
*** rnorwood has joined #openstack01:17
*** warik has left #openstack01:18
*** rookie_tang has joined #openstack01:19
rookie_tangHi everyone, i have a question: can i configure FlatDHCP mode with a nova-network service running on every compute node?01:21
*** serkamil has joined #openstack01:22
serkamilhello01:23
*** oubiwann has quit IRC01:26
*** ayoung has joined #openstack01:27
*** kbringard has joined #openstack01:30
*** jakedahn has joined #openstack01:31
*** AndrewWeiss has joined #openstack01:32
*** AndrewWeiss has quit IRC01:34
*** AndrewWeiss has joined #openstack01:34
*** DaOmarN_ has quit IRC01:35
*** AndrewWeiss has quit IRC01:36
*** mikal has quit IRC01:37
*** AndrewWeiss has joined #openstack01:37
*** mikal has joined #openstack01:38
*** AndrewWeiss has quit IRC01:39
*** aculich has quit IRC01:40
*** bsza has quit IRC01:40
*** mszilagyi has quit IRC01:46
*** miclorb_ has quit IRC01:46
*** dolphm has joined #openstack01:48
*** dwcramer has quit IRC01:48
*** mgius has quit IRC01:49
*** wariola has joined #openstack01:51
*** jakkudanieru has joined #openstack01:52
*** nRy has joined #openstack01:54
*** deshantm_ has joined #openstack01:56
*** mnewby has quit IRC01:57
*** littleidea has quit IRC01:58
*** supriya has joined #openstack01:59
*** littleidea has joined #openstack01:59
*** deshantm has quit IRC02:00
*** deshantm_ is now known as deshantm02:00
*** phschwartz|rem has quit IRC02:00
*** dwcramer has joined #openstack02:02
*** miclorb_ has joined #openstack02:03
*** mattstep has quit IRC02:03
*** mattstep has joined #openstack02:04
*** ccc3 has joined #openstack02:06
*** Yak-n-Yeti has joined #openstack02:11
*** ohnoimdead has quit IRC02:14
*** kbringard has quit IRC02:16
*** cryptk is now known as cryptk|offline02:21
*** AndrewWeiss has joined #openstack02:21
AndrewWeisshey, is anyone available for help in the chatroom?02:21
*** supriya has quit IRC02:22
*** cryptk|offline is now known as cryptk02:23
*** erkules has quit IRC02:26
*** erkules has joined #openstack02:27
*** cryptk is now known as cryptk|offline02:28
*** Pr0toc0l has joined #openstack02:28
*** Clow has joined #openstack02:30
*** Brlink has quit IRC02:30
notmynameAndrewWeiss: most people are active in here during US business hours. what do you need help with?02:30
*** _adjohn has joined #openstack02:30
AndrewWeissnotmyname: yea, i figured that…i've been having an ongoing issue with euca-tools and keystone02:31
*** _adjohn has quit IRC02:31
notmynamesorry to hear that. unfortunately, I don't know enough to help since I don't use those parts of openstack02:31
AndrewWeissok no worries, thanks though02:31
*** adjohn has quit IRC02:33
*** ayoung has quit IRC02:33
*** adjohn has joined #openstack02:34
*** adjohn has quit IRC02:35
*** bengrue has quit IRC02:36
*** nand_nanda_21 has joined #openstack02:36
*** ayoung has joined #openstack02:39
*** Pr0toc0l has quit IRC02:40
*** swills has joined #openstack02:41
*** swills has left #openstack02:43
*** oubiwann has joined #openstack02:45
*** ayoung has quit IRC02:46
*** AndrewWeiss has quit IRC02:48
*** mnewby has joined #openstack02:55
*** rnorwood has quit IRC02:57
*** supriya has joined #openstack03:00
*** vincentricci has left #openstack03:04
*** nati2 has quit IRC03:06
*** HugoKuo_ has joined #openstack03:07
*** andrewbogott[gon is now known as andrewbogott03:10
*** hugokuo has quit IRC03:11
*** jkyle has quit IRC03:12
*** dayou has quit IRC03:18
*** dayou has joined #openstack03:20
*** supriya has quit IRC03:22
*** rackerhacker has quit IRC03:24
*** rackerhacker has joined #openstack03:27
*** jdurgin has quit IRC03:27
*** jkyle has joined #openstack03:32
*** martine has joined #openstack03:36
*** Gordonz has joined #openstack03:37
*** maplebed has quit IRC03:43
*** oubiwann has quit IRC03:51
*** hadrian has quit IRC03:52
*** osier has joined #openstack03:53
*** oubiwann has joined #openstack03:54
*** aa has quit IRC03:54
*** avtar has quit IRC03:54
*** littleidea has quit IRC03:55
*** littleidea has joined #openstack03:56
*** aa has joined #openstack03:58
*** lloydde has joined #openstack03:59
*** nand_nanda_21 has quit IRC04:02
*** shevek_ has quit IRC04:08
*** dwcramer has quit IRC04:12
*** dolphm has quit IRC04:13
*** nati2 has joined #openstack04:14
*** redbo has quit IRC04:14
*** nati2 has quit IRC04:15
*** shraddha has joined #openstack04:17
shraddhahow do we upload an application image to openstack?04:18
*** nati2 has joined #openstack04:18
*** Turicas has quit IRC04:20
deshantmshraddha: have you looked at glance for that?04:21
shraddhadeshantm: no.. can we do it using the dashboard?04:21
*** jakedahn has quit IRC04:23
*** MarkAtwood has joined #openstack04:24
deshantmshraddha: looks like work in progress (https://blueprints.launchpad.net/horizon/+spec/image-upload)04:25
deshantmshraddha: in the mean time, you should be able to use other tools or the api04:25
shraddhadeshantm: thanks. So for now what do we use?04:25
shraddhaeuca-tools?04:26
deshantmeuca-tools is one way, but it looks like there is a glance command to do it04:28
deshantmsee: http://devstack.org/stack.sh.html04:28
deshantm"Upload an image to glance"04:28
deshantmshraddha: see the glance add command example04:29
shraddhadeshantm: great. thanks. I will look it up04:29
deshantmshraddha: you may want to take a look at docs.openstack.org for other documentation04:30
deshantmshraddha: for example http://docs.openstack.org/diablo/openstack-image-service/admin/content/using-the-glance-cli-tool.html04:31
*** spiffxp has quit IRC04:32
shraddhadeshantm: thanks a lot!04:32
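
For context, the "Upload an image to glance" step that deshantm points to in stack.sh reduces to a single glance add call. A minimal sketch for the Diablo-era glance CLI, assuming a qcow2 image (the name and filename are placeholders):

    glance add name="my-app-image" is_public=true container_format=ovf disk_format=qcow2 < my-app-image.img
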
*** littleidea has quit IRC04:37
*** wariola has quit IRC04:37
*** littleidea has joined #openstack04:38
*** redbo has joined #openstack04:44
*** ChanServ sets mode: +v redbo04:44
*** MarkAtwood has quit IRC04:51
*** mdomsch_ has joined #openstack04:53
*** oubiwann has quit IRC04:55
*** CaptTofu1 has quit IRC04:55
*** wariola has joined #openstack04:56
*** koolhead17|zzZZ is now known as koolhead1705:00
*** CrazyThinker has quit IRC05:08
*** pitichai has joined #openstack05:11
*** micadeyeye has quit IRC05:13
*** micadeyeye_ has quit IRC05:13
*** martine has quit IRC05:13
*** redbo has quit IRC05:13
*** CrazyThinker has joined #openstack05:14
*** jakedahn has joined #openstack05:15
*** dwcramer has joined #openstack05:17
*** redbo has joined #openstack05:17
*** ChanServ sets mode: +v redbo05:17
*** Yak-n-Yeti has quit IRC05:17
*** nati2 has quit IRC05:19
*** koolhead17 has quit IRC05:28
*** mnewby has quit IRC05:28
*** russf has quit IRC05:31
*** nati2 has joined #openstack05:33
*** anticw has quit IRC05:38
*** natea_ has joined #openstack05:42
*** natea has quit IRC05:45
*** andrewbogott is now known as andrewbogott[gon05:45
*** Gordonz has quit IRC05:47
*** miclorb_ has quit IRC05:47
*** anticw has joined #openstack05:49
*** dwcramer has quit IRC05:53
*** egant has quit IRC05:54
*** cryptk|offline is now known as cryptk05:55
*** natea has joined #openstack05:57
*** russf has joined #openstack05:57
*** ldlework has quit IRC06:03
*** miclorb_ has joined #openstack06:03
*** shraddha has quit IRC06:05
*** dwcramer has joined #openstack06:09
*** deshantm has quit IRC06:17
*** lloydde has quit IRC06:28
*** bepernoot has joined #openstack06:34
*** bepernoot has quit IRC06:35
*** bepernoot has joined #openstack06:35
*** miclorb_ has quit IRC06:38
*** wonk has quit IRC06:40
*** nati2 has quit IRC06:42
*** nati2 has joined #openstack06:43
*** wariola has quit IRC06:44
*** dwcramer has quit IRC06:44
*** benner has quit IRC06:47
*** ghe_ has quit IRC06:47
*** rwmjones has quit IRC06:47
*** cliassom has quit IRC06:47
*** trevorj has quit IRC06:47
*** perestrelka has quit IRC06:47
*** jeblair has quit IRC06:47
*** SvenDowideit has quit IRC06:47
*** ikke-t has quit IRC06:47
*** Jbain has quit IRC06:47
*** chasmo has quit IRC06:47
*** comstud has quit IRC06:47
*** byeager has quit IRC06:47
*** iRTermite has quit IRC06:47
*** nilsson has quit IRC06:47
*** uvirtbot has quit IRC06:47
*** smoser has quit IRC06:48
*** chasmo has joined #openstack06:48
*** smoser has joined #openstack06:50
*** benner has joined #openstack06:51
*** ghe_ has joined #openstack06:51
*** rwmjones has joined #openstack06:51
*** cliassom has joined #openstack06:51
*** trevorj has joined #openstack06:51
*** perestrelka has joined #openstack06:51
*** jeblair has joined #openstack06:51
*** SvenDowideit has joined #openstack06:51
*** ikke-t has joined #openstack06:51
*** Jbain has joined #openstack06:51
*** comstud has joined #openstack06:51
*** byeager has joined #openstack06:51
*** iRTermite has joined #openstack06:51
*** nilsson has joined #openstack06:51
*** uvirtbot has joined #openstack06:51
*** ccc3 has quit IRC06:53
*** PiotrSikora has quit IRC06:55
*** wariola has joined #openstack06:55
*** PiotrSikora has joined #openstack06:56
*** bepernoot has quit IRC06:56
*** asavu has joined #openstack06:56
*** miclorb_ has joined #openstack06:56
*** nati2 has quit IRC07:07
*** mindpixel has joined #openstack07:07
*** wonk has joined #openstack07:11
*** jakedahn has quit IRC07:13
*** jakedahn has joined #openstack07:14
*** jakedahn_ has joined #openstack07:15
*** guigui has joined #openstack07:16
*** jakedahn has quit IRC07:18
*** jakedahn_ is now known as jakedahn07:18
*** nati2 has joined #openstack07:19
*** natea has quit IRC07:19
*** rookie_tang has quit IRC07:23
*** paulmillar has joined #openstack07:29
*** kaigan has joined #openstack07:31
*** kaigan has quit IRC07:34
*** miclorb_ has quit IRC07:38
*** zhenhua has quit IRC07:56
*** bepernoot has joined #openstack07:57
*** paulmillar has quit IRC07:58
*** nati2 has quit IRC08:01
*** littleidea has quit IRC08:02
*** bepernoo1 has joined #openstack08:06
*** reidrac has joined #openstack08:08
*** bepernoot has quit IRC08:09
*** saccharine has joined #openstack08:15
*** Brlink has joined #openstack08:17
*** Ramonster has joined #openstack08:17
*** binbash_ has quit IRC08:17
*** binbash_ has joined #openstack08:17
*** Clow has quit IRC08:18
*** aloga has joined #openstack08:20
*** yshh has joined #openstack08:21
*** sandywalsh has quit IRC08:24
*** shevek_ has joined #openstack08:26
*** arBmind has joined #openstack08:28
*** supriya has joined #openstack08:31
*** chasing`Sol has quit IRC08:33
*** katkee has joined #openstack08:35
*** paulmillar has joined #openstack08:37
*** al has quit IRC08:38
*** al has joined #openstack08:38
*** al has quit IRC08:40
*** al has joined #openstack08:40
*** Remco_ has joined #openstack08:40
*** darraghb has joined #openstack08:58
*** oneiroi has joined #openstack09:03
*** sandywalsh has joined #openstack09:05
*** derekh has joined #openstack09:09
*** nacx has joined #openstack09:13
*** xthaox has joined #openstack09:16
*** uksysadmin has joined #openstack09:16
*** maploin has joined #openstack09:20
*** maploin has joined #openstack09:20
*** aspiers has joined #openstack09:23
*** clauden_ has quit IRC09:23
*** derekh has quit IRC09:25
*** jakkudanieru has quit IRC09:26
*** jakkudanieru has joined #openstack09:27
*** clauden has joined #openstack09:27
*** pixelbeat has joined #openstack09:29
*** jakkudanieru has quit IRC09:30
*** jakkudanieru has joined #openstack09:31
*** almaisan-away is now known as al-maisan09:31
*** dev_sa has joined #openstack09:32
*** jakkudanieru has quit IRC09:33
*** jakkudanieru has joined #openstack09:34
*** praefect has quit IRC09:34
*** jantje_ has quit IRC09:34
*** jantje has joined #openstack09:34
*** exekias has quit IRC09:35
*** exekias has joined #openstack09:35
corXiDoes anyone know how long it will take to get your CLA thingy approved on launchpad? Created one yesterday...09:35
*** derekh has joined #openstack09:42
mattstepPretty quick, but make sure you update the wiki page, join the launchpad group, etc.09:43
corXimattstep: did all of such steps afaik....09:43
*** asavu_ has joined #openstack09:45
*** asavu has quit IRC09:47
*** asavu_ is now known as asavu09:47
*** Tristan|i3D has joined #openstack09:50
*** dxd828 has quit IRC09:51
Tristan|i3DI am building myself a Centos 6 image from a iso09:52
Tristan|i3Dcreate a centos.img09:52
Tristan|i3Dbut what is the container format?09:52
Tristan|i3Dimg?09:52
*** jkyle has quit IRC09:59
*** lxu has quit IRC10:03
*** lxu has joined #openstack10:04
Tristan|i3DI guess its raw10:05
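
Assuming the .img really is a raw disk image (e.g. created with kvm-img create -f raw), registering it would look roughly like the following sketch; disk_format describes the on-disk format and container_format can stay ovf for a plain image (name and filename are placeholders):

    glance add name="centos-6" is_public=true container_format=ovf disk_format=raw < centos.img
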
*** oneiroi has quit IRC10:07
*** livemoon has quit IRC10:08
*** Triade has joined #openstack10:13
*** jeroenhn has joined #openstack10:14
*** ches has quit IRC10:15
*** ches has joined #openstack10:15
*** Brlink has quit IRC10:16
*** raimaz has joined #openstack10:19
*** jakedahn has quit IRC10:19
raimazquit10:21
*** raimaz has quit IRC10:22
*** raimaz has joined #openstack10:30
raimazpitichai:10:31
raimazopps10:31
*** PiotrSikora has quit IRC10:34
*** PiotrSikora has joined #openstack10:37
*** m2b_ has joined #openstack10:39
*** dayou has quit IRC10:44
*** jeroenhn has quit IRC10:45
*** rocambole has joined #openstack10:48
*** ergalassi has joined #openstack10:51
Tristan|i3DSo i have build a centos image10:56
Tristan|i3Dbut when I launch the image, the instance wont get any IP address10:56
Tristan|i3Dany idea whats wrong?10:56
*** faitoo has joined #openstack10:57
*** kyriakos has left #openstack11:04
uksysadminTristan|i3D: connect via vnc (computehost:5900, 5901, etc) and see why11:04
*** serkamil has quit IRC11:19
*** kyriakos has joined #openstack11:20
*** martianixor has joined #openstack11:29
*** mies has joined #openstack11:38
*** xthaox has quit IRC11:40
*** m2b_ has quit IRC11:45
*** hwestman_ has quit IRC11:47
*** hwestman has joined #openstack11:47
*** Triade has quit IRC11:48
*** livemoon has joined #openstack11:48
*** ahasenack has joined #openstack11:55
*** journeeman has joined #openstack11:59
*** chaosdonkey has joined #openstack11:59
*** rkukura has quit IRC12:00
*** jobicoppola has joined #openstack12:01
*** jobicoppola has quit IRC12:03
journeemanHi. The openstack-compute starter guide lists a flag `--cc_host' to be added to nova.conf (http://goo.gl/I75mT) but the reference for flags in nova.conf doesn't list it. What does the flag stand for? Do I need to add it in nova.conf?12:04
journeemanNova version is 2011.312:05
journeemanAppreciate any help :)12:05
journeemanDoes it stand for the host machine where nova-api is running?12:07
*** supriya has quit IRC12:11
Tristan|i3Djourneeman, where is the reference guide for nova.conf located?12:12
*** spampel has joined #openstack12:20
*** CaptTofu has joined #openstack12:20
spampelHello to everone! I have one question: Is it possible to resume an OpenStack Upload?12:21
journeemanTristan|i3D: http://goo.gl/nuOll12:21
*** dubey has joined #openstack12:21
*** bsza has joined #openstack12:21
dubeyhello12:21
journeemanTristan|i3D: It does not list all the flags but, nova-compute --help doesn't have it either12:22
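
The channel never answers this, but journeeman's own guess is probably right: as an assumption based on that starter guide, --cc_host names the cloud controller host (the machine running nova-api and the other central services). In the 2011.3 flag-file style nova.conf it would be a line like the following sketch, with a placeholder address:

    # hypothetical nova.conf fragment; the IP is a placeholder for the cloud controller
    --cc_host=192.168.3.1
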
spampelI mean an Upload to the Object Storage.12:22
*** pitichai has quit IRC12:23
zynzelkeystone bug, trunk: OperationalError: (OperationalError) attempt to write a readonly database u'INSERT INTO tokens12:24
*** andrewsmedina has quit IRC12:29
spampelDoes anyone know if Swift supports Content-Range Header for  PUT Requests?12:32
*** ahasenack has quit IRC12:34
*** ahasenack has joined #openstack12:35
*** zigo has joined #openstack12:35
zynzeloh, bad chmod, not bug ;)12:36
*** leifmadsen has joined #openstack12:41
*** zigo has quit IRC12:44
*** zigo has joined #openstack12:45
*** leifmadsen has quit IRC12:46
*** markvoelker has joined #openstack12:46
*** oneiroi has joined #openstack12:48
*** marrusl has joined #openstack12:48
*** ritesh has joined #openstack12:49
*** andrewsmedina has joined #openstack12:50
*** leifmadsen has joined #openstack12:50
*** zigo-_- has joined #openstack12:57
*** zigo has quit IRC12:57
*** guigui has quit IRC12:58
*** leifmadsen has quit IRC13:00
*** leifmadsen has joined #openstack13:00
*** gray-- has joined #openstack13:00
*** martianixor has quit IRC13:04
*** praefect has joined #openstack13:04
*** lts has joined #openstack13:04
*** Ryan_Lane has joined #openstack13:05
*** guigui1 has joined #openstack13:07
cliassomwho knows why the swift proxy returns 404 on step 2 when I follow http://swift.openstack.org/howto_installmultinode.html#create-swift-admin-account-and-test ?13:08
cliassomgoogling brings nothing13:10
*** hadrian has joined #openstack13:13
riteshiam able to ping a vm from my controller but not from outside the controller13:13
*** jelmer has quit IRC13:18
spampel@cliassom What did you get back from the first request?13:18
larissaspampel: Error: "cliassom" is not a valid command.13:18
cliassom@spampel HTTP/1.1 200 OK13:19
larissacliassom: Error: "spampel" is not a valid command.13:19
*** martianixor has joined #openstack13:19
cliassomwith X-Storage-Url, X-Storage-Token, X-Auth-Token13:20
spampelAre you using TempAuth?13:20
cliassomyep13:20
cliassomnow I'm considering using swauth13:21
spampelDid you specify a X-Storage-Url in the proxy.conf?13:21
cliassomyes, just like the manual says13:21
spampelCan you post the actual command that you used? (obfuscated)13:23
*** ritesh has quit IRC13:24
*** livemoon has quit IRC13:24
Tristan|i3Dmy instances wont get any dhcp / static ip address13:25
Kiallcgoncalves: yes, they work on single server. They actually work better on single server ;)13:25
Tristan|i3Dwhat am i doing wrong?13:25
Kiallphschwartz: I've never seen that happen, but it sounds like a routing issue to me.13:26
Tristan|i3Dsorry, note: the instance i am talking about is an instance created with my own centos image13:26
spampel#cliassom: Are you doing the SAIO setup? Or do you have multiple nodes?13:26
cliassomhere's proxy-server.conf http://pastebin.com/uiFxv6ae and here are commands http://pastebin.com/RBK6zs8M13:26
cliassommultiple nodes13:26
cliassomwhat's SAIO?13:27
spampelSwift All in One13:27
cliassomah, ok13:27
*** kyriakos has quit IRC13:27
*** jelmer has joined #openstack13:27
*** gray-- has quit IRC13:28
*** Remco_ has quit IRC13:28
spampelIs the IP you used in proxy-server.conf reachable from the PC you are testing with or is it the internal interface of the proxy13:28
spampel?13:28
cliassominvoking curl on the proxy machine13:29
*** jeroenhn has joined #openstack13:29
notmynamecliassom: spampel: I know what the issue is. let me find the config variable to paste13:29
spampelHow does your $PROXY_LOCAL_NET_IP look like?13:30
cliassom192.168.56.10513:30
notmynamecliassom: in the [app:proxy-server] section, set "account_autocreate = true", then restart the proxy server (swift-init proxy restart)13:30
cliassomok, thanks, will do13:31
spampelThat will work13:31
spampelI had the same problem. But didn't remember what the actual fix was.13:31
phschwartzkiall: That is what I am thinking. But I am not sure what the issue might be. I might get one of the networking guys to take a look.13:31
KiallIs there only 1 upstream router, just accessible over multiple VLANs?13:32
cliassomnotmyname: spampel: is it any way to do the manual modification request?13:32
Kiallphschwartz: As in .. is you had a default gateway for each interface, would all the GW IPs be the same physical router?13:32
notmynamecliassom: you mean updating the docs? or doing something non-automatically?13:33
*** dev_sa has quit IRC13:33
cliassomnotmyname: just ask openstack guys to update the doc13:33
cliassomwith account_autocreate = true13:33
phschwartzkiall: 2 switches, one on the external side and one on the internal side. The internal side is the one that has the hang up issue.13:33
Kiallphschwartz: the switches double as routers??13:34
*** maploin has quit IRC13:34
*** maploin has joined #openstack13:34
*** maploin has quit IRC13:35
*** maploin has joined #openstack13:35
KiallWhat I'm getting at is, if all the GW IPs for each VLAN are the same physical piece of hardware, regardless of the cables/switches used to get there or not..13:35
phschwartzYes, the external is a high end Force10 switch with level 2 routing internal. The internal side is a cisco super 2924xl with level 2 routing.13:35
phschwartzKiall: Ah, the answer to that is yes.13:35
*** perestre1ka has joined #openstack13:35
*** perestrelka has quit IRC13:35
KiallWait - That can't be a yes if you use a F10 and Cisco router ;)13:36
phschwartzWait, I just reread the question. I am still out of it due to this cold. lol13:36
*** ahasenack has quit IRC13:36
notmynamecliassom: actually, it's already in the docs (see the example config here: http://swift.openstack.org/howto_installmultinode.html#configure-the-proxy-node)13:37
phschwartzNo, the GW ips for the VLans are different. Which is why I am getting the routing issue. I will have to add a few static routes to the internal network between vlans.13:37
KiallIf the next router - an actual router i.e. the thing you would set as a default gateway in a simple setup - are the same, then you can likely get things working by disabling reverse path filtering on the router13:37
cliassomnotmyname: account_autocreate is absent there13:37
cliassomand yes, it helps, just checked that out13:38
*** dendro-afk is now known as dendrobates13:38
notmynamecliassom: ah. I was seeing allow_account_management. sorry13:38
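
The fix notmyname describes, in context: a sketch of the relevant proxy-server.conf section (other settings as in the multinode install guide):

    [app:proxy-server]
    use = egg:swift#proxy
    allow_account_management = true
    account_autocreate = true

followed by a restart of the proxy, e.g. swift-init proxy restart.
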
phschwartzkiall: I didn't think about that. Let me try it and I will get back to it.13:38
phschwartz*s/it/you13:38
KiallRPF is used to drop packets from unverifiable IPs. Eg if a packet arrives at the router on its 1.0.0.0/24 nic, but has a source address in 2.0.0.0/24, the router will scrap the packet13:39
*** ahasenack has joined #openstack13:39
*** martine has joined #openstack13:39
KiallIf both router IPs point at the same piece of hardware, its usually enough to disable that and it should be happy..13:40
KiallIf its 2x routers, things get more complex.13:40
Kiall2x physical routers*13:40
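
If the box doing the routing were Linux, the reverse-path filtering Kiall describes is controlled by the rp_filter sysctl; a hedged sketch (the Force10/Cisco gear mentioned above has its own equivalent knobs, and the interface name here is a placeholder):

    # relax strict reverse-path filtering on a Linux router
    sysctl -w net.ipv4.conf.all.rp_filter=0
    sysctl -w net.ipv4.conf.eth1.rp_filter=0
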
notmynamecliassom: https://review.openstack.org/#change,395813:40
phschwartzMaybe I will disable routing on the internal network and have it all use the router on the external network.13:40
phschwartzMight clean it up a bit.13:40
Kiallphschwartz: meaning the instances are only accessible over floating IPs?13:40
phschwartzkiall: That is what I was thinking. I only need the nodes really to communicate with eachother over the private, nothing else.13:41
Kiall(Thats the standard + correct setup IMHO .. And I really do mean in my opinion.. I've never seen anything official saying it should be done that way)13:41
phschwartzkiall: I will do that as I think it will be easier to maintain in the long run. :)13:42
ivoksi'm having problems connecting dashboard with nova... querying glance objects (images) works from dashboard, nova objects (like keypairs) are accessible with nova CLI client, but dashboard complains with 'Unable to retrieve keypair list'. api-paste.ini looks ok (i guess)13:42
phschwartzHell if I need another block of floating ip's I can add them easy enough even though I started with a /2513:42
KiallNo matter what happens, when a computer can reach the outside world (ie outside its own subnets) via more than 1 interface, things go haywire13:43
phschwartzVery true.13:43
Kiallphschwartz: right, you can add more - even in a completely different subnet easily enough13:43
phschwartzkiall: exactly, and then only need to be routable on the public network.13:43
KiallExactly, the private side IMO is meant to only be accessible from nova nodes and instances..13:44
cliassomnotmyname: ok13:44
*** dwcramer has joined #openstack13:45
Kiall"CloudPipe" Nova's VPN stuff is intended to provide access to the private VLANs without floating IPs if needed, but I've not heard of anyone getting it working (yet) ;)13:45
Kiall(and I've never tried)13:45
phschwartzkiallL: ok, that switch to single routing works nicely. I will leave it like that.13:47
*** mra has joined #openstack13:48
*** dxd828 has joined #openstack13:52
*** hggdh has quit IRC13:52
*** daysmen has joined #openstack13:52
corXiKiall I have cloudpipe working just fine :P13:53
phschwartzWhat exactly is cloudpipe?13:53
*** shang has joined #openstack13:53
KiallI probably should have qualified that statement a tad..  "but I've not heard of anyone getting it working with keystone"13:54
corXiexcept it lacks keystone support....13:54
Kiall;)13:54
corXifor which I have a patch13:54
corXibut can't commit .... CLA :S13:54
Kiall-_-13:54
KiallCompany wont let you sign the CLA?13:54
corXiyeah I did sign CLA.... but waiting for admins to approve it on launchpad :S13:54
KiallOh.. when did you apply to the LP group?13:55
*** ahasenack has quit IRC13:55
*** ahasenack has joined #openstack13:56
corXiyesterday.... but there are some other pending requests from people in there (one is even 6days old)... so I'm not sure how often this pending list is checked/approved13:56
KiallPing one of these guys if you want.. https://launchpad.net/~openstack-admins13:57
*** xthaox has joined #openstack13:57
KiallMost of them are in the -dev chan, and AFAIK they usually do the list every day13:57
corXithey're all in other timezone +8hrs .. except for one which isn't online :(13:58
KiallOlder entries are likely people who messed up, didnt use the right LP id or w/e13:58
corXiah okie13:58
*** osier has quit IRC13:59
*** shang has quit IRC14:00
annegentlettx: can you approve the pending CLA members? see ^^14:01
*** supriya has joined #openstack14:02
* ttx looks14:02
*** hadrian_ has joined #openstack14:02
*** supriya has quit IRC14:03
*** supriya has joined #openstack14:03
ttxKiall: you actually need to ping a core developer. They are the ones that can add new members.14:03
Kiallttx: not for me :)14:03
*** marrusl has quit IRC14:04
* ttx is not an admin of the membership in that group.14:04
KiallRight - I was wondering why anne suggested you ;)14:04
ttxcorXi: which project are you contributing for ?14:04
*** lborda has joined #openstack14:04
corXicompute I guess... cloudpipe....14:04
ttxhmm, actually I can. Let me know how I can fix you14:05
ttxlet me see*14:05
annegentleKiall: ttx oh interesting, ttx is in the openstack-admins group, but it's actually a core member needed14:05
*** uncleofthestick has joined #openstack14:05
annegentleer, that should have been a question?14:05
* Kiall didnt see ttx in the admins group14:05
*** hadrian has quit IRC14:06
*** hadrian_ is now known as hadrian14:06
* Kiall gets back to work.. or maybe a nap. Something that doesnt involve too much thinking!14:06
annegentleKiall: ttx is Theirry :)14:06
ttxI must inherit it from some admin group.14:06
*** daysmen has quit IRC14:06
KiallYea - I didn't notice his name on the group page, I always forget to click into the full member list!14:06
ttxcorXi: what's your name ?14:07
*** dendrobates is now known as dendro-afk14:07
*** sandywalsh has quit IRC14:07
corXittx: Cor Cornelisse14:08
ttxcorXi: done14:09
*** shang has joined #openstack14:09
corXittx: cool! Tnx a lot!14:10
*** hggdh has joined #openstack14:11
KiallNow all you have to do is figure gerrit out ;)14:11
*** mrjazzcat has joined #openstack14:12
corXiKiall: managed to get that stuff setup last time I committed something ... so that's easy :P14:12
*** daysmen has joined #openstack14:14
*** dwcramer has quit IRC14:14
Tristan|i3DKiall, http://docs.openstack.org/diablo/openstack-compute/starter/content/Uploading_to_OpenStack-d1e1534.html <- I dont have euca-* available with my diablo install, how come?14:14
ttxTristan|i3D: euca-* is in the euca2ools package14:15
KiallDid you install the euca2ools package14:15
Kiall?14:15
KiallAs detailed here ;) http://docs.openstack.org/diablo/openstack-compute/starter/content/Client_Tools-d1e1206.html14:15
Tristan|i3DKiall, no I did not. Not needed untill now. Glance add worked fine14:15
Kiallthe euca tools are alternatives to the nova and glance commands..14:16
Kiallthey use the EC2 compatible API, rather than the OpenStack native API14:16
*** deshantm has joined #openstack14:16
Tristan|i3DI figured.14:16
Kiallannegentle: BTW, Are there any plans to update the docs to use the native tools primarily?14:16
Tristan|i3D^ would like to know, issue now for me14:17
uvirtbotTristan|i3D: Error: "would" is not a valid command.14:17
KiallI've noticed it causing all sorts of confusion with A) What does Eucalyptus have to do with OpenStack, and B) Keystone14:17
Tristan|i3Dright14:17
Tristan|i3DI need to figure out the openstack native command for uploading ubuntufinal.img14:18
Kiallthat's `glance add`14:18
Kiallhttps://github.com/managedit/openstack-setup/blob/master/glance-upload-oneiric.sh14:18
Tristan|i3DI know14:18
Tristan|i3Dbut with the --kernel etc14:18
KiallAvoid using external kernels wherever possible...14:19
KiallSame for ramdisk...14:19
KiallIf you use an external kernel/ramdisk - you can never update the kernel inside the instance..14:19
Tristan|i3Dwell I followed the 'creating linux image ubuntu' guide14:19
Kiallglance add name="ubuntu-final" is_public=true container_format=ovf disk_format=qcow2 < ubuntufinal.img14:19
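
If an external kernel and ramdisk are unavoidable, the Diablo-era pattern (a sketch; the filenames come from the guide quoted earlier and the returned IDs are placeholders) is to register the kernel and ramdisk first, then reference their IDs when adding the machine image:

    glance add name="ubuntu-kernel"  disk_format=aki container_format=aki < vmlinuz-2.6.38-7-server
    glance add name="ubuntu-ramdisk" disk_format=ari container_format=ari < initrd.img-2.6.38-7-server
    # suppose the two commands above reported IDs 5 and 6
    glance add name="ubuntu-final" is_public=true disk_format=ami container_format=ami \
        kernel_id=5 ramdisk_id=6 < ubuntufinal.img
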
*** stuntmachine has joined #openstack14:20
Kiallassuming its a qcow2 image14:20
*** sandywalsh has joined #openstack14:20
*** dendro-afk is now known as dendrobates14:20
Tristan|i3Dhow do I know its a qcow2 image?14:20
Tristan|i3Dif following the guide14:21
KiallDid you create an qcow2 image, or a raw image?14:21
Kiall(I've no clue what the guide says BTW)14:21
Tristan|i3Dsudo kvm -m 256 -cdrom ubuntu-11.10-server-amd64.iso -drive file=server.img,if=scsi,index=0 -boot d -net nic -net user -nographic  -vnc :014:21
*** dev_sa has joined #openstack14:21
KiallWhat did you use to create "server.img"14:21
Tristan|i3Dsorry14:21
Tristan|i3Dwrong paste14:21
Tristan|i3Dkvm-img create -f raw server.img 5G14:21
Tristan|i3Dso raw :)14:21
KiallRight, so its a raw image rather than qcow214:22
*** marrusl has joined #openstack14:22
*** ergalassi has quit IRC14:23
Tristan|i3DSo the guide should say qcow2 instead?14:23
KiallNo - Either work14:23
Kiallyou just need to tell glance add what kind of image you are uploading14:24
Kiallglance add name="ubuntu-final" is_public=true container_format=ovf disk_format=qcow2 < ubuntufinal.img14:24
Kiallvs14:24
Kiallglance add name="ubuntu-final" is_public=true container_format=ovf disk_format=raw < ubuntufinal.img14:24
Tristan|i3Dright14:24
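
If it is unclear which format an existing image file is in, qemu-img (or the kvm-img wrapper used earlier) can report it:

    qemu-img info ubuntufinal.img
    # look for "file format: raw" or "file format: qcow2" in the output
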
Tristan|i3Dnova@server10326:~$ glance -A 3529c3aa-d616-4ee5-96a5-6134d89e0cf6 index14:25
Tristan|i3DNot authorized to make this request. Check your credentials (OS_AUTH_USER, OS_AUTH_KEY, ...).14:25
Tristan|i3DI think I passed the 24h period :)14:25
*** ayoung has joined #openstack14:25
Kiall;)14:26
Tristan|i3Dhow come I need to add -A14:26
Tristan|i3Dwhere most guides says none of this14:26
*** dolphm has joined #openstack14:26
Kiallbecause, as it says, you haven't set the OS_AUTH_USER, OS_AUTH_KEY etc env variables14:26
Tristan|i3Dahh14:26
Tristan|i3Dok added14:29
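
A sketch of what "added" means here, using the variable names from the error message; values are placeholders, and a keystone-backed install typically also needs tenant and auth-URL variables (check glance --help for the full list on your version):

    export OS_AUTH_USER=admin
    export OS_AUTH_KEY=secretpassword
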
*** stuntmachine has quit IRC14:29
Tristan|i3Dlets see if it runs14:29
*** nilsson_ has joined #openstack14:29
*** comstud_ has joined #openstack14:30
*** uvirtbot has quit IRC14:30
*** ghe_ has quit IRC14:30
*** ikke-t has quit IRC14:30
*** rwmjones has quit IRC14:30
*** nilsson has quit IRC14:30
*** comstud has quit IRC14:30
*** rwmjones has joined #openstack14:30
*** ghe_ has joined #openstack14:30
Tristan|i3Dnope :(14:30
*** ikke-t has joined #openstack14:31
Tristan|i3Dhmmm14:31
*** uvirtbot has joined #openstack14:31
BasTichelaaris anyone using keystone backed by ldap?14:31
Tristan|i3Dstill having issues BasTichelaar ;)14:32
Tristan|i3Dsorry I do not, cant help14:32
BasTichelaarTristan|i3D: issues with what?14:33
Tristan|i3DBasTichelaar, keystone + ldap, you were here yesterday14:33
BasTichelaarah ok14:33
BasTichelaarwell, I managed to get keystone working fine, and I can create and remove tenants using the api14:33
Tristan|i3Dcool14:34
BasTichelaarbut was wondering if it would make sense to switch the backend to ldap14:34
Tristan|i3Dgot it documented?14:34
BasTichelaarwhat, setting up keystone?14:34
BasTichelaarI can paste it in a pastebin or so14:34
Tristan|i3Dah thought with ldap14:34
BasTichelaarok14:34
BasTichelaarwill try it out now14:35
BasTichelaarIll let you know14:35
BasTichelaarbut not sure if there are any advantages in using ldap over db backend14:35
*** littleidea has joined #openstack14:35
*** zigo-_- has quit IRC14:36
*** stuntmachine has joined #openstack14:36
BasTichelaarbut it seems even the developers are discussing about it14:36
BasTichelaarhttp://www.mail-archive.com/openstack@lists.launchpad.net/msg07151.html14:36
BasTichelaarhmm, interesting, keystone light14:37
BasTichelaarhttps://github.com/termie/keystonelight14:37
*** mattray has joined #openstack14:37
*** dendrobates is now known as dendro-afk14:38
*** dendro-afk is now known as dendrobates14:38
*** littleidea has quit IRC14:38
*** dprince has quit IRC14:38
*** dovetaildan has joined #openstack14:38
Tristan|i3DStill To Do14:38
Tristan|i3D        LDAP backend14:38
*** littleidea has joined #openstack14:39
*** mdomsch_ has quit IRC14:40
*** mdomsch has quit IRC14:40
*** Vek has quit IRC14:41
*** mdomsch has joined #openstack14:41
BasTichelaarTristan|i3D: haha, yes14:41
BasTichelaarTristan|i3D: but its interesting: http://robhirschfeld.com/2012/01/30/openstack-keystone-makes-smart-bold-move-to-improve-quality/14:42
*** mdomsch_ has joined #openstack14:43
*** dolphm has quit IRC14:43
Tristan|i3DBasTichelaar, true14:43
*** mattray has quit IRC14:44
*** MarkAtwood has joined #openstack14:45
*** esker has joined #openstack14:45
*** jj0hns0n has joined #openstack14:46
*** mattray has joined #openstack14:46
*** katkee has quit IRC14:47
*** zigo has joined #openstack14:47
Tristan|i3DSo I dont understand one part of the OS installation guide for a custom ubuntu image. Does the single / ext4 partition need to be boot-flagged? As mine wont boot.14:48
Tristan|i3DWell it says, could not read from CDrom14:48
Tristan|i3Dhm thats weird14:48
*** iRTermite has quit IRC14:48
*** dubey has quit IRC14:49
*** imsplitbit has joined #openstack14:49
*** Jbain has quit IRC14:49
Tristan|i3DHow do I shutdown the vm after the installation? Kill the kvm process?14:49
*** Jbain has joined #openstack14:50
*** AlanClark has joined #openstack14:50
*** SplasPood has quit IRC15:01
*** DaOmarN has joined #openstack15:01
*** littleidea has quit IRC15:02
*** apebit has quit IRC15:02
*** gnu111 has joined #openstack15:02
*** hub_cap has joined #openstack15:03
*** iRTermite has joined #openstack15:03
gnu111can i run nova-manage commands outside of my nova management node? do i just need to install python-novaclient?15:03
*** ldlework has joined #openstack15:04
*** MarkAtwood has quit IRC15:07
*** dendrobates is now known as dendro-afk15:07
*** deshantm_ has joined #openstack15:09
*** SplasPood has joined #openstack15:09
*** dendro-afk is now known as dendrobates15:09
*** gnu111 has left #openstack15:10
*** zigo has quit IRC15:12
*** faitoo has quit IRC15:13
*** rkukura has joined #openstack15:13
*** deshantm has quit IRC15:13
Tristan|i3Dannegentle, I have the feeling this guide is still for cactus? http://docs.openstack.org/diablo/openstack-compute/starter/content/Uploading_to_OpenStack-d1e1534.html15:14
Tristan|i3D"The last step would be to upload the images to OpenStack Imaging Server glance. The files that need to be uploaded for the above sample setup of Ubuntu are: vmlinuz-2.6.38-7-server, initrd.img-2.6.38-7-server, serverfinal.img" There is no previous mention about extracting and uploading the kernels15:15
*** MarkAtwood has joined #openstack15:16
*** deshantm_ is now known as deshantm15:16
*** robbiew has joined #openstack15:21
Tristan|i3DKiall, can you guide me on how to create a ubuntu image?15:23
Tristan|i3Dthe installation part goes alright, but if I complete the installation and the system reboots it cannot find its boot device. keeps trying to pxe.15:24
Tristan|i3DWhen I launch the same img with a new kvm console it boots ubuntu fine15:24
*** DaOmarN has quit IRC15:24
Kiallwhat does the /etc/fstab look like inside the new image?15:24
Tristan|i3Dlets see15:25
Tristan|i3Dwhen I run nova@server10326:~$ sudo kvm -m 256 -drive file=server.img,if=scsi,index=0,boot=on -boot c -net nic -net user -nographic -vnc :015:25
Tristan|i3Dits boots just fine15:25
Tristan|i3DI can vnc to it15:25
Tristan|i3Dhmmm the UUID is still hex15:26
Tristan|i3DIve read in the cactus docs that needed to change15:26
Tristan|i3Dhttp://docs.openstack.org/cactus/openstack-compute/starter/content/Tweaking_etc_fstab-d1e1463.html15:27
Tristan|i3Dthis part is missing in the diablo docs15:27
KiallYea - You need to change that15:28
Tristan|i3Dpfff15:28
KiallI use "LABEL=cloudimg-rootfs", matching the current ubuntu images15:28
KiallAlso .. `rm -f  /etc/udev/rules.d/70-persistent-*`15:29
Tristan|i3Ddone that15:29
*** nphase has joined #openstack15:29
Kiallif you boot it back up, you need to do it again15:29
Tristan|i3Doh15:29
*** esker has left #openstack15:29
*** dendrobates is now known as dendro-afk15:29
*** dendro-afk is now known as dendrobates15:29
Tristan|i3Dnova@server10326:~$ sudo rm -rf /mnt/etc/udev/rules.d/15:30
Tristan|i3D70-persistent-cd.rules  README15:30
Tristan|i3Dits not there15:30
Tristan|i3Dso im good15:30
Tristan|i3Dhttp://docs.openstack.org/cactus/openstack-compute/starter/content/Kernel_and_Initrd_for_OpenStack-d1e1503.html15:30
Tristan|i3Dthis part is not needed anymore?15:30
*** micadeyeye has joined #openstack15:30
*** micadeyeye_ has joined #openstack15:30
KiallNo - I avoid using an external kernel/ramdisk where possible...15:31
*** esker has joined #openstack15:31
Tristan|i3Dok reuploading the image15:32
*** rnorwood has joined #openstack15:33
*** andrewbogott[gon is now known as andrewbogott15:33
*** andrewbogott has joined #openstack15:33
*** lloydde has joined #openstack15:33
*** lloydde has quit IRC15:33
*** lloydde has joined #openstack15:34
Tristan|i3Dcan I create a tiny instance even if the image is 5gb?15:34
Tristan|i3DKiall, nope same issue15:35
Tristan|i3Dstill booting pxe15:35
Tristan|i3Dit might have not been saved to the serverfinal.img15:36
*** russf has quit IRC15:37
KiallDid you remember to pull the partition out of the image? When you install, serverfinal.img is treated as a disk.. the installer then creates a partition inside it..15:38
KiallDid you pull the partition out into its own file?15:38
Tristan|i3Di did. ill start over just to be sure15:38
Tristan|i3Dnow with the step where fstab is being changed15:38
Tristan|i3Doh wait15:39
KiallProbably no need to start over, boot (or mount) serverfinal.img (assuming that's the extracted partition), make any changes, and upload15:39
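
One way to pull the single partition out of the whole-disk server.img into its own file, as a hedged sketch using kpartx and dd (the /dev/mapper name is an assumption; check what kpartx actually reports):

    sudo kpartx -av server.img               # maps the image's partitions under /dev/mapper/
    sudo dd if=/dev/mapper/loop0p1 of=serverfinal.img bs=4M
    sudo kpartx -dv server.img               # remove the mappings again
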
Tristan|i3DThere is a tweaking fstab part in the diablo docs15:39
Tristan|i3Dit's hidden under the OS installation part15:39
Tristan|i3Dffs15:39
Kiallweird - it says to use "uec-rootfs" even though all the ubuntu images use "cloudimg-rootfs"15:40
*** aspiers has quit IRC15:41
*** katkee has joined #openstack15:41
Tristan|i3Ddoes the bootable flag need to be 'on' on the single ext4 / partition?15:41
Tristan|i3Dguess not15:42
KiallI would imagine so ;)15:42
Tristan|i3Dit does?15:42
Tristan|i3Dor does it not need to be bootable15:42
KiallWell.. Yea.. If you want the VM's BIOS to try and boot it, it should be marked as bootable...15:42
Tristan|i3Dthe docs dont mention15:43
Tristan|i3Dso thats why im in doubt :)15:43
*** nRy has quit IRC15:43
*** andrewbogott has quit IRC15:44
*** andrewbogott has joined #openstack15:44
Tristan|i3Dok Kiall im now at this part15:45
Tristan|i3DAfter finishing the installation, relaunch the VM by executing the following command.15:45
*** dendrobates is now known as dendro-afk15:45
Tristan|i3Dbut the thing is. I can only reboot after the install15:45
Tristan|i3Dand the reboot wont boot15:45
*** lloydde has quit IRC15:45
Kiall"wont boot"?15:46
*** natea has joined #openstack15:46
Tristan|i3Das in: it wont find any bootable disk. Tries to pxe15:46
Tristan|i3DI can kill the kvm process ofcourse15:46
KiallDid you amend the fstab before rebooting?15:46
Tristan|i3Dand relaunch then15:46
Tristan|i3Dhow? going in console?15:47
KiallNo, Just wondering if you did.. It would cause issues if you did..15:47
Kiall(at this state_)15:47
Kiallstage*15:47
Tristan|i3Dah no i did not15:47
Tristan|i3DI have to option to continue (and reboot) or go back15:47
Tristan|i3DIn the previous 3 times I chose contine15:47
Tristan|i3Dcontinue* and it failed to boot15:48
KiallHonestly can't remember, It's been ages since I build a DIY image...15:48
Tristan|i3Dso i am wondering if I am doing something wrong here15:48
*** Aim__ has joined #openstack15:50
*** esker has quit IRC15:50
*** livemoon has joined #openstack15:50
Tristan|i3Dheres the screen from after the reboot15:51
*** nRy has joined #openstack15:51
Tristan|i3Dhttps://mail.google.com/mail/u/0/?ui=2&view=bsp&ver=ohhl4rw8mbn415:51
Tristan|i3Doh15:51
Tristan|i3Dhmm15:51
*** esker has joined #openstack15:51
Tristan|i3Dhttp://imageshack.us/photo/my-images/836/qemu.png/15:51
Tristan|i3Dwell, Ill kill the kvm process and continue15:52
Tristan|i3Dwell see15:52
*** MarkAtwood has quit IRC15:52
*** dendro-afk is now known as dendrobates15:52
*** cp16net has joined #openstack15:53
*** russf has joined #openstack15:53
*** lloydde has joined #openstack15:54
*** supriya has quit IRC15:54
*** Aim has quit IRC15:54
*** MarkAtwood has joined #openstack15:54
*** asavu has quit IRC15:54
*** andrewbogott has quit IRC15:54
*** andrewbogott has joined #openstack15:54
*** dolphm has joined #openstack15:55
*** esker has quit IRC15:56
Tristan|i3DKiall, shall i proceed with uec-rootfs or cloudimg-rootfs15:57
*** rnirmal has joined #openstack15:57
*** yshh has quit IRC15:57
*** esker has joined #openstack15:57
*** zzed has joined #openstack15:58
*** Yak-n-Yeti has joined #openstack16:03
*** al-maisan is now known as almaisan-away16:04
*** mindpixel has quit IRC16:05
*** zigo has joined #openstack16:07
*** guigui1 has quit IRC16:08
mattrayanyone here working on thefreecloud.org?16:09
*** markwash has quit IRC16:10
*** freeflyi1g has joined #openstack16:10
*** markwash has joined #openstack16:11
Tristan|i3DKiall, started all over, followed every step. Still got the same problem. Any idea?16:11
*** shang has quit IRC16:12
*** reidrac has quit IRC16:12
*** freeflying has quit IRC16:13
*** marrusl has quit IRC16:14
*** lloydde has quit IRC16:15
*** judd7 has joined #openstack16:17
*** lloydde has joined #openstack16:19
mattrayare the API keys somewhere exposed in Dashboard?16:20
annegentlemattray: check out #freecloud16:22
*** blamar_ has quit IRC16:22
mattrayannegentle: thanks!16:22
annegentlemattray: nati2 and jaypipes are the main points of contact, I'm working on it too, though not as an admin. I think you have to request the API key on the command line, it's not in the Dashboard that I know of.16:22
*** marrusl has joined #openstack16:23
mattraygood to know16:23
mattraymaybe someone in there will help me get setup, since I have web access16:23
*** lloydde has quit IRC16:26
*** dev_sa has quit IRC16:27
*** dprince has joined #openstack16:27
*** zigo has quit IRC16:28
*** livemoon has left #openstack16:28
*** bepernoo1 has quit IRC16:30
*** rnorwood has quit IRC16:31
Kiallmattray: the user/pass you use for web access will get you native API access (ie not access the EC2 compatibility API)16:31
*** jeroenhn has quit IRC16:32
*** mies has quit IRC16:32
mattrayKiall: your access key id and secret key?16:32
mattrayKiall: so you use your login as your access key and your pass as your secret key?16:33
KiallOpenStack has 2 APIs.. The "OpenStack API" and the "EC2 Compatible API" .. The user/pass you use for the dashboard is the key and secret for the "OpenStack API"16:33
*** oneiroi has quit IRC16:34
mattrayexcellent, I hadn't used the OpenStack API16:34
mattraylast time I worked with the ec2 api16:34
*** rnorwood has joined #openstack16:34
*** apebit has joined #openstack16:34
*** blamar_ has joined #openstack16:35
*** armaan has joined #openstack16:35
*** cloudgeek has joined #openstack16:35
cloudgeekany alternative for amazon how setup16:37
cloudgeeklike for private VM openstack is super16:37
*** dspano has joined #openstack16:37
*** Vek has joined #openstack16:38
*** MarkAtwood has quit IRC16:40
*** maplebed has joined #openstack16:40
*** spiffxp has joined #openstack16:41
*** DaOmarN has joined #openstack16:42
*** tinova has joined #openstack16:43
*** tinova has left #openstack16:44
*** gray-- has joined #openstack16:45
*** judd7 has quit IRC16:45
Tristan|i3DAnyone else who can help me with my image problem?16:46
*** xthaox has quit IRC16:46
*** gray-- has quit IRC16:47
*** maploin has quit IRC16:48
*** dendrobates is now known as dendro-afk16:50
*** uksysadmin has quit IRC16:50
*** russf_ has joined #openstack16:51
*** vladimir3p has joined #openstack16:53
*** russf has quit IRC16:53
*** russf_ is now known as russf16:53
*** lloydde has joined #openstack16:55
*** dxd828 has quit IRC16:57
*** KyleMacDonald has joined #openstack16:57
*** littleidea has joined #openstack16:59
*** rnorwood has quit IRC16:59
*** rnorwood1 has joined #openstack17:01
*** natea_ has joined #openstack17:03
*** rods has joined #openstack17:05
*** esker has quit IRC17:06
*** natea has quit IRC17:07
*** natea_ is now known as natea17:07
*** raimaz has quit IRC17:08
*** natea has quit IRC17:08
*** andrewsmedina has quit IRC17:08
*** natea has joined #openstack17:08
*** hub-cap has joined #openstack17:10
*** cloudfly has joined #openstack17:11
*** Vivek has quit IRC17:12
*** katkee has quit IRC17:13
*** hub_cap has quit IRC17:14
*** hub-cap is now known as hub_cap17:14
*** mattray has quit IRC17:17
*** Ramonster has quit IRC17:18
*** armaan has left #openstack17:18
*** russf has quit IRC17:18
*** nati2 has joined #openstack17:19
*** littleidea has quit IRC17:19
*** Tristan|i3D has quit IRC17:20
*** littleidea has joined #openstack17:20
*** dxd828 has joined #openstack17:21
*** dxd828 has quit IRC17:26
*** hub_cap has quit IRC17:27
*** llang629 has joined #openstack17:27
*** hub_cap has joined #openstack17:27
*** llang629 has left #openstack17:27
*** wonk has quit IRC17:28
*** warik has joined #openstack17:29
*** dolphm has quit IRC17:32
*** thingee1 has joined #openstack17:33
*** dolphm has joined #openstack17:34
*** nand_nanda_21 has joined #openstack17:34
*** thingee has quit IRC17:35
*** chaosdonkey has quit IRC17:35
*** cmagina has quit IRC17:38
*** cmagina has joined #openstack17:39
*** mies has joined #openstack17:39
*** dxd828 has joined #openstack17:40
*** martianixor has quit IRC17:40
*** cloudgeek has quit IRC17:42
*** bepernoot has joined #openstack17:43
*** cloudgeek has joined #openstack17:43
*** j^2 has quit IRC17:45
*** j^2 has joined #openstack17:45
*** corXi has left #openstack17:46
*** shall_ has quit IRC17:48
*** bepernoot has quit IRC17:48
*** chaosdonkey has joined #openstack17:48
*** cloudgeek has left #openstack17:52
*** pixelbeat has quit IRC17:53
*** jdurgin has joined #openstack17:54
*** chaosdonkey has quit IRC17:56
*** ohnoimdead has joined #openstack17:59
*** hub_cap has quit IRC17:59
*** mtaylor has quit IRC17:59
*** mtaylor has joined #openstack17:59
*** ChanServ sets mode: +v mtaylor17:59
*** hub_cap has joined #openstack18:00
*** judd7 has joined #openstack18:01
*** mdomsch_ has quit IRC18:01
*** Ryan_Lane has quit IRC18:01
*** nand_nanda_21 has quit IRC18:02
*** derekh has quit IRC18:03
*** Ryan_Lane has joined #openstack18:03
*** bepernoot has joined #openstack18:04
*** rocambole has quit IRC18:04
*** spiffxp has quit IRC18:05
*** russf has joined #openstack18:08
*** gyee has joined #openstack18:09
*** arBmind has quit IRC18:09
*** davepigott has joined #openstack18:13
davepigottjamespage: ping?18:13
*** russf has quit IRC18:13
*** natea has quit IRC18:14
*** bepernoot has quit IRC18:14
*** natea has joined #openstack18:14
davepigottAnybody? I have a problem. I've installed openstack and got swift running as per the Beginner's Guide (plus some guidance from jamespage). When I  get swift up, I do the suggested "curl" command and the response I get is:  "This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to18:15
davepigottWhat have I got wrong and where?18:15
dolphmdavepigott: what's the suggested curl command?18:16
davepigottdolphm: curl -v -H ’X-Storage-User: admin:admin’ -H ’X-Storage-Pass: admin’ http://127.0.0.1:8080/auth/v1.018:17
davepigottIt's just designed to see that swift is running properly18:17
davepigottWhich, obivously, it isn't. :)18:18
dolphmdavepigott: where did you set up the admin user? i.e. which authentication implementation18:19
davepigottdolphm: I'm not sure. I've been pretty much following the script, as it were. Weirdly this worked before. We had a crash and I had to reconfigure things again18:20
davepigottdolphm: And I don't see anywhere in the document that actually sets up an admin user18:21
*** chaosdonkey has joined #openstack18:22
*** shevek_ has quit IRC18:22
*** mszilagyi has joined #openstack18:23
dolphmdavepigott: what happens when you `curl http://localhost:35357/v2.0/`18:23
davepigottdolphm: I get "couldn't connect to host"18:24
dolphmdavepigott: is keystone installed? startup should just be `keystone` if it's on your path18:25
notmynamedavepigott: do you have a link to the guide you followed to set up swift?18:26
davepigottdolphm: No. It's not installed18:26
dolphmdavepigott: you might be able to configure swift without it, but that's probably the problem18:27
davepigottnotmyname: http://cssoss.files.wordpress.com/2011/11/openstackbookv2-0_csscorp.pdf18:27
notmynameno, keystone is not required in any way fro swift18:27
davepigottdolphm: I didn't have it installed previously when it worked18:27
notmynamedavepigott: so I can assume your proxy config matches the one found in section 2.2.7.4 of that document?18:28
davepigottnotmyname: Yep18:29
notmynamedavepigott: the important part is the [filter:tempauth] section18:29
notmynamedavepigott: cool18:29
davepigottnotmyname: Ah yes. That's where the admin user gets set up. I see. :)18:30
notmynamedavepigott: the "admin" user you are using is set in the "user_admin_admin" line18:30
notmynameya18:30
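
For context, the credentials come from the [filter:tempauth] section of that guide's proxy config; the pattern is user_<account>_<user> = <key> [group] [group], so the admin:admin user with key admin is defined by a line like this sketch:

    [filter:tempauth]
    use = egg:swift#tempauth
    user_admin_admin = admin .admin .reseller_admin
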
*** darraghb has quit IRC18:30
dolphmcool18:30
notmynamedavepigott: you said you had a crash? everything is started and working?18:31
notmynameincluding memcached?18:31
davepigottnotmyname: It is now. The crash was because of a network config error. I installed memcached. Let me check if it's running18:31
notmynamedavepigott: actaully, we can simply look in the logs to see what's going on18:31
davepigottnotmyname: True. Which one of the many? ;)18:32
notmynamedavepigott: look in syslog (assuming you haven'r redirected it somewhere--I'm not sure what that guide does)18:32
*** chaosdonkey has quit IRC18:33
davepigottnotmyname: Just so you know, it says it can't resolve admin:18:33
davepigott* getaddrinfo(3) failed for admin:8018:33
davepigott* Couldn't resolve host 'admin'18:33
*** andrewsmedina has joined #openstack18:33
*** spiffxp has joined #openstack18:34
*** vincentricci has joined #openstack18:34
notmynamedavepigott: try this `curl -i -H "X-Auth-User: admin:admin" -H "X-Auth-Key: admin" http://127.0.0.1:8080/auth/v1.0`18:35
notmynameslightly modified version of your command18:35
davepigottHTTP/1.1 200 OK18:35
davepigottX-Storage-Url: http://127.0.0.1:8080/v1/AUTH_admin18:35
davepigottX-Storage-Token: AUTH_tk1c2bc30e07d749b49ec4aa57430a106a18:35
davepigottX-Auth-Token: AUTH_tk1c2bc30e07d749b49ec4aa57430a106a18:35
davepigottContent-Length: 018:35
davepigottDate: Thu, 09 Feb 2012 18:35:24 GMT18:35
notmynamedavepigott: so it worked. now use the X-Auth-Token value for subsequent requests: `curl -i -H "X-Auth-Token: AUTH_tk1c2bc30e07d749b49ec4aa57430a106a" http://127.0.0.1:8080/v1/AUTH_admin18:36
notmynameand have fun18:36
davepigottnotmyname: Thanks! But why did that command fail when it worked previously? And the response from the original was a lot longer (as per the doc)?18:37
notmynamenot sure. you should use X-Auth-Token/X-Auth-User/X-Auth-Key instead of the X-Storage-* ones. those are deprecated18:38
davepigottnotmyname: Ah. I did do an apt-get upgrade. Could that explain it?18:39
notmynamedavepigott: it shouldn't have affected anything with swift. we haven't (ever) made any backwards-incompatible changes18:39
davepigottnotmyname: Oh. OK. Put it down to weirdness18:40
davepigottnotmyname: It worked! Brilliant. Thanks a lot!18:41
wariknotmyname: hi! have you worked on a multiple nodes swift env. before?18:42
notmynamewarik: heh, ya :-) I'm on the cloud files team at rackspace. we run several swift clusters with thousands of nodes18:43
warikoh! well! Hello sir :)18:43
*** dwcramer has joined #openstack18:44
notmynamethanks for the chuckle ;-)18:44
notmynamewarik: something I can help with?18:44
wariknotmyname :) I am not stuck or anything, I will soon deploy a 5-node swift cluster and I have a question regarding the zones18:45
notmynameok18:45
davepigottnotmyname: Quick question...18:45
warikok, what strategy should I use to create my zones? should I have multiple XFS partitions on each of my nodes and assign these partitions to zones, OR only one big partition divided into multiple ones?18:46
warik(although, i am not sure that is really clear...)18:47
*** russf has joined #openstack18:47
*** chaosdonkey has joined #openstack18:47
davepigottnotmyname: In the document it says in nova.conf put --glance_api_servers to point at the main node, but I can't find any documentation on that. And in one place it seems to imply it should point at the other node.18:48
warikin other words, what do you recommend in terms of "zoning" and "partitioning" on the storage node18:48
warikthanks18:48
davepigottnotmyname: No wait. It seems to be saying that the main server points at the compute node and the compute node points at the main server18:49
*** qazwsx has joined #openstack18:49
*** marrusl_ has joined #openstack18:49
davepigottnotmyname: 2.2.6.2 and 2.3.4 sections in the doc18:49
*** marrusl has quit IRC18:49
*** marrusl has joined #openstack18:50
davepigottnotmyname: What's the real story on that parameter?18:50
*** marrusl has quit IRC18:50
*** marrusl_ is now known as marrusl18:50
notmynamewarik: zones should be as distinct as your deployment allows. it's probably a bad idea to slice a single box into multiple zones (normally). with 5 servers, I'd set one zone per server18:51
notmynameor you could add one more box and do 3 zones of 2 servers18:52
notmynamewarik: the point is that each of the three replicas will never be in the same zone. so the zone becomes your failure isolation. set it as high as you can (cabinet, DC room, DC, region, etc)18:52
wariknotmyname: ok! that makes sense!  thanks!18:53
notmynamewarik: oh, and you should only have one filesystem partition per drive18:53
notmynamewell, at least with swift18:53
warikalright!18:54
notmynamedavepigott: ahhh, sorry. I don't know enough about nova to help with configuration18:54
davepigottnotmyname: No worries. Thanks a lot for all your help!18:54
notmynamewarik: splitting a drive into 2 mount points (ie 2 partitions) adds complexity and doesn't get you any extra availability or durability (since it's still the same physical disk)18:55
wariknotmyname: does a configuration, sda1 (system files) sda2 (XFS for swift) works well? Or only one big partition?18:55
warikok!18:55
*** chaosdonkey has quit IRC18:56
notmynamewarik: we run our nodes with an OS drive (or volume, could be RAID'ed drives) and then a bunch of disks for swift data (normally no RAID, but JBOD)18:56
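A rough sketch of preparing one such data drive for swift on a storage node, assuming an illustrative device /dev/sdb and the conventional /srv/node mount point (none of these names come from the discussion above):

    # one filesystem per physical drive, no RAID, mounted where the object server expects it
    mkfs.xfs -i size=1024 /dev/sdb
    mkdir -p /srv/node/sdb
    mount -o noatime,nodiratime,logbufs=8 /dev/sdb /srv/node/sdb
    chown -R swift:swift /srv/node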
*** pixelbeat has joined #openstack18:56
wariknotmyname:  interesting18:57
notmynamewarik: how much storage will your cluster have? (if you can share that)19:00
warikthat will be 5 servers 1.7T each  (but probably 3 servers for now)19:01
*** gray-- has joined #openstack19:01
notmynamewarik: there's something else I should point out about swift zones19:02
notmynamewarik: you grow your swift cluster by either adding to each zone equally or adding a new zone the same size as the existing ones (ie zones need to be the same size)19:02
*** AndrewWeiss has joined #openstack19:03
*** gray-- has quit IRC19:03
wariknotmyname: that is really interesting! what happens if you want to add more space to your zones? you have to delete the zones and re-create them ?19:03
*** jedi4ever has quit IRC19:04
zynzelwarik: you can add new node to zone with higher weight (for example 200)19:05
jcapelwhy do the zones need to be the same size? we have 5 zones and 6 servers, although the balance is not perfect it works well enough19:05
zynzelrebalance ring and remove 'small' node, after replication19:05
warikzynzel: got it! thanks!19:06
*** rnorwood1 has quit IRC19:06
zynzelbut also you can have mixed setup, with small and big nodes in zone.19:07
zynzelas jcapel says.19:07
*** arBmind has joined #openstack19:07
warikzynzel: that makes sense, especially if you need to add more space19:07
*** ejat has joined #openstack19:07
warikthanks guys!19:08
*** imsplitbit has quit IRC19:08
zynzelnp ;)19:08
jcapelwe have zones set up per-switch btw19:08
AndrewWeisshey guys, was wondering if anyone could help me out with a swift/keystone authentication issue19:09
jcapeland we found free rack space in the datacenter here-and-there to put the swift cluster nodes in, nicely spread out19:09
*** Aim__ is now known as Aim19:10
*** clauden has quit IRC19:10
notmynamewarik: actually, that's not quite right19:10
notmynamezynzel: ^19:10
notmynamewarik: zynzel: the weight is set on drives, not zones19:10
*** clauden_ has joined #openstack19:10
zynzeljcapel: what hardware you use on storage node? i have to choose between ssd and sas hdd.19:10
jcapelfor the storage nodes we use 2TB SAS disks19:11
jcapelbut we have container and account services on corsair force 3 SSDs19:11
notmynamejcapel: if the zones are the same size, then it will use the space more efficiently19:11
zynzelnotmyname: yes, but if you need 'resize' zone, you can add new node with big weight19:11
jcapelthe 7200rpm disks could not keep up with the container and account i/o19:11
zynzeland after replication, remove small nodes.19:11
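A hedged sketch of that swap for an object ring, with illustrative zone numbers, addresses, and device names:

    # add the new, larger node to the zone with a higher weight, then rebalance
    swift-ring-builder object.builder add z3-10.0.0.13:6000/sdb1 200
    swift-ring-builder object.builder rebalance
    # once replication has caught up, drop the old small node and rebalance again
    swift-ring-builder object.builder remove z3-10.0.0.3:6000/sdb1
    swift-ring-builder object.builder rebalance
    # push the updated ring files out to every node after each rebalance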
notmynamejcapel: +1 to SSDs for account+container19:12
jcapelyeah it was just not doable without SSDs19:12
jcapelwe host a lot of very small files19:12
jcapelwe're experimenting with the flashcache module for the object servers19:13
notmynamezynzel: ya, but then if you have 3 zones where one is twice as big, you'll "fill up" before the big zone gets full19:13
jcapelif you plan on adding servers as the cluster grows, I don't think that's a big deal19:13
zynzelnotmyname: imho any change in one zone, should lead to change in every zone19:14
notmynamezynzel: agreed19:14
*** dolphm has quit IRC19:14
praefecthey guys just a quick question, if openstack is using virtio for disks, why doesn't the qemu-kvm command line (ps -ef) reflect that... it says if=none and not if=virtio19:14
praefecthow can I confirm it uses virtio?19:15
praefectfor disks19:15
warikzynzel:  how do you defining the "weight" ?19:15
zynzelwarik: swift-ring-builder ring add zNUM-IP:PORT/DEV_COM weight19:16
warikzynzel:  got it19:16
zynzelor simply swift-ring-builder ring set_weight SEARCH_PATTERN NEW_WEIGHT19:16
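Putting that together with the one-zone-per-server advice above, a hedged example of building an object ring for five storage nodes (part power, replica count, IPs, ports, and device names are all illustrative):

    swift-ring-builder object.builder create 18 3 1
    # one zone per server, one entry per data drive, weight roughly proportional to drive size
    swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100
    swift-ring-builder object.builder add z2-10.0.0.2:6000/sdb1 100
    swift-ring-builder object.builder add z3-10.0.0.3:6000/sdb1 100
    swift-ring-builder object.builder add z4-10.0.0.4:6000/sdb1 100
    swift-ring-builder object.builder add z5-10.0.0.5:6000/sdb1 100
    swift-ring-builder object.builder rebalance
    # later, grow a device in place by raising its weight and rebalancing
    swift-ring-builder object.builder set_weight z3-10.0.0.3:6000/sdb1 200
    swift-ring-builder object.builder rebalance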
termieBasTichelaar: the work is now being done on the redux branch of keystone19:16
praefectin the diablo doc, the section about windows image says: start your instance with kvm (blablabla) if=virtio... but still, on a running openstack node, all the qemu-kvm lines have if=none19:16
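One way to check, as a hedged sketch: with libvirt-driven kvm the -drive entry only defines the backing file (hence if=none), and the bus type shows up on the paired -device argument and in the domain XML (instance name illustrative):

    # the virtio part is on the -device line, not the -drive line
    ps -ef | grep qemu | grep -o 'virtio-blk[^ ]*'
    # or ask libvirt for the disk definition of a given instance;
    # <target dev='vda' bus='virtio'/> means the disk is attached via virtio
    virsh dumpxml instance-00000001 | grep -A3 '<disk'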
termieBasTichelaar: and ayoung is working on ldap stuff19:16
wariknotmyname: another one… how should I split up the container / object node?19:17
*** phschwartz|rem has joined #openstack19:18
phschwartz|remAfternoon19:18
*** JStoker has quit IRC19:19
*** mnewby has joined #openstack19:22
*** massiverobok has joined #openstack19:23
*** JStoker has joined #openstack19:24
*** chasing`Sol has joined #openstack19:24
*** gray-- has joined #openstack19:25
*** KyleMacDonald has quit IRC19:25
*** marrusl has quit IRC19:26
*** shevek_ has joined #openstack19:27
*** marrusl has joined #openstack19:28
*** thingee1 has quit IRC19:28
* davepigott notmyname: And indeed anyone: I got everything back up and running and then when I started an instance and tried to attach to it via ssh it wouldn't connect. I assume I should use ssh myusername@instance.ip.address. Then I read that I had to do a "euca-authorize -P tcp -p 22 default", which I did on the main cloud server, and nova-network died. When I try to restart it, I get this in the log file: 19:30
davepigott(nova): TRACE: Command: sudo vconfig add br100 10019:30
davepigott(nova): TRACE: Exit code: 319:30
davepigott(nova): TRACE: Stdout: ''19:30
davepigott(nova): TRACE: Stderr: 'ERROR: trying to add VLAN #100 to IF -:br100:-  error: No such device\n'19:30
*** nilsson_ is now known as nilsson19:30
davepigottThis is where I was before when it all went wrong, because I added br100 as a network bridge and then the server just died.19:31
davepigottAny ideas?19:31
mjforkdavepigott: look in nova.conf, what is your vlan_interface19:32
mjforkis it br100?19:32
*** bepernoot has joined #openstack19:33
davepigottmjfork: Yep19:36
mjforkthat needs to be an eth* device19:36
mjforkthat doc is broken19:36
mjforkopened a bug report, can't find it now19:36
davepigottmjfork: ****!19:36
davepigottWhich one? eth1 or eth019:36
mjforkdepends19:36
mjforkwhich do you want the vlan on19:36
davepigotteth119:36
davepigottanswered my own question. :)19:36
mjforkyes19:37
*** uncleofthestick has quit IRC19:37
mjforkso, you need to update the DB too19:37
mjforkget into my sql DB19:37
mjforkSELECT vlan_interface FROM networks;19:37
davepigottUsing postgres19:38
mjforkok19:38
mjforkeither way19:38
davepigottok19:38
*** ahasenack has quit IRC19:38
*** ahasenack has joined #openstack19:38
*** bepernoot has quit IRC19:38
*** andrewbogott is now known as andrewbogott[gon19:40
davepigottmjfork: Says relation networks doesn't exist19:41
*** gilbakrunk has joined #openstack19:41
mjfork\dt19:42
mjforkor, it may be in a schema19:42
mjfork\dn i think?19:42
*** bepernoot has joined #openstack19:42
*** bepernoot has quit IRC19:43
davepigottSays there's one schema called public. Doesn't work if I say 'select vlan_interface from public'19:44
*** xthaox has joined #openstack19:44
*** ppradhan has joined #openstack19:45
*** whenry has quit IRC19:45
mjfork\dt networks19:46
*** ejat has quit IRC19:46
mjfork\dt show anything?19:46
mjforkare you in the right DB?19:46
davepigottnot a seasoned postgres user so no idea. :)19:47
mjforkso why postgres over mysql :-P19:48
mjforkselect datname from pg_database;19:48
mjforkrun that19:48
davepigottmjfork: It's ok, we worked it out. It's \c. Postgres because that's the db the team who are going to use this already know19:49
davepigottSo do I go into nova as the db?19:49
mjforknot sure, so are you connected to the right DB?19:50
zynzeldavepigott: \l19:50
davepigottmjfork: I'm connected to the nova db, and it says vlan_interface column doesn't exist19:50
zynzeldavepigott: and then probably \c nova19:50
mjfork\d networks19:50
mjforkdavepigott: what do you see for *_interface columns?19:51
davepigottmjfork: Only bridge_interface19:51
*** mattstep has quit IRC19:52
mjforkok, thats it19:52
mjforkSELECT bridge_interface FROM networks;19:52
mjforkwhat do you see19:52
*** mattstep has joined #openstack19:52
*** lxu has quit IRC19:52
davepigottbr10019:52
mjforkok19:53
mjforkUPDATE networks SET bridge_interface = 'eth1';19:53
*** al has quit IRC19:53
mjforkrestart nova-network19:54
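The same fix in one place, as a sketch against a nova database in postgres (database name, user, and interface as used in this exchange; adjust to your environment):

    psql -U nova -d nova -c "SELECT bridge_interface FROM networks;"   # shows br100
    psql -U nova -d nova -c "UPDATE networks SET bridge_interface = 'eth1';"
    sudo service nova-network restart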
*** al has joined #openstack19:54
davepigottmjfork: ok...19:54
ppradhanmjfork: a question about instance images19:55
ppradhanmjfork: I have OS images in swift.19:55
davepigottmjfork: Yay! It's staying up!!19:55
davepigott\o/19:55
davepigottmjfork: Thanks a lot. :)19:55
mjforkgood19:55
mjforkppradhan: whats up19:55
notmynamewarik: sorry for the delay. we had a fire alarm19:56
wariknotmyname: no problem19:56
ppradhanmjfork: hello19:56
notmynamewarik: how to split containers and objects?19:56
ppradhanmjfork: mjfork: is it possible to use swift containers for storing compute node disk images which can be seen at /var/lib/nova/instances/_base19:56
mjforkppradhan: no, not running VMs, just images which can be downloaded to the location19:56
ppradhanmjfork: so in production what is recommended?19:57
notmynamewarik: in what way do you mean how to split them?19:57
ppradhanmjfork: i mean whats the typical method19:57
wariknotmyname: yes, I just want to setup this swift cluster the most efficient way19:57
mjforkppradhan: what do you mean? in production you run your VM on block devices like hard drives19:58
*** russf has quit IRC19:58
mjforkthat directory could be NFS if you want to do live migration19:58
*** warik_ has joined #openstack19:58
*** albert23 has joined #openstack19:59
notmynamewarik: as always, it depends on your use case. will you have large containers? small objects? large objects? mostly GETs? mostly PUTs?19:59
notmynamewarik: however...19:59
ppradhanppradhan: ok19:59
ppradhanmjfork: ok19:59
warik_notmyname: that will be mostly GETs and the object size will be between 5GB and 40GB20:00
notmynamewarik: long-term scalability in swift is helped if you run each part on optimized hardware (ie instead of "good enough" hardware for everything)20:00
*** bepernoot has joined #openstack20:00
*** Leseb has joined #openstack20:00
notmynamewarik: side note, you're aware of swift's 5GB/object limit with the ability to support larger file sizes by using a large-object manifest?20:00
warik_I read about that yes20:01
*** warik has quit IRC20:02
*** warik_ is now known as warik20:02
notmynamewarik: for a large-scale cluster, I'd recommend running on 2 hardware SKUs: a) proxy + object servers optimized for CPU+RAM+dense storage and b) account+container servers optimized for IOPS(+RAM)20:02
notmynamewarik: but you're only talking about <10TB, right?20:03
*** AndrewWeiss has quit IRC20:03
*** apebit has quit IRC20:03
*** ahasenack has quit IRC20:03
wariknotmyname: yes, the images/files will be less than 10TB20:03
warikthe max size will be maybe 50GB20:04
notmynamewarik: how many objects do you expect to have in one container?20:04
wariknotmyname: that I don't really know at this point20:05
*** bepernoot has quit IRC20:05
*** AndrewWeiss_ has joined #openstack20:05
warikI don't have a really good idea of the final infrastructure20:06
notmynamewarik: can you realistically keep it below, say, 1 million objects per container?20:06
wariknotmyname: sure20:06
*** al has quit IRC20:07
*** al has joined #openstack20:07
notmynamewarik: do you need HA for the proxies (ie 100% uptime)?20:07
*** al has quit IRC20:07
notmyname(obviously you want as good as you can get, but better == more $)20:07
*** al has joined #openstack20:07
gilbakrunkHi. Question: Since non nova-volume disks (i.e disks that reside on the compute nodes) are ephemeral, I don't see any disadvandage (from a safety point of view) to  using cache=writeback. Is this correct?20:08
*** rnirmal has quit IRC20:08
notmynamewarik: and what do you expect your req/sec to be?20:09
wariknotmyname: I thought the multiple zones will be the "HA" for now20:09
*** pfibiger has joined #openstack20:09
*** AndrewWeiss_ has quit IRC20:09
gilbakrunkI mean, why wouldn't I use cache=writeback with qcow2 images instead of default of writethrough20:09
*** AndrewWeiss has joined #openstack20:09
*** al has quit IRC20:10
*** al has joined #openstack20:10
ppradhanmjfork: what flag should we use to change this defaul location /var/lib/nova/instances/_base20:10
ppradhan?20:10
notmynamewarik: yes, that will ensure availability and durability for the storage nodes. for the proxies, though (which don't have to be running on the same box), you could have either just one or multiple behind a load balancer. just one is simpler, but you don't have the failover20:10
mjforkppradhan: don't change it, just symlink or mount the storage you want there20:11
ppradhanmjfork: ok20:11
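A sketch of that approach, with an illustrative spare volume and paths, rather than changing any nova flag:

    # stop the compute service, put bigger storage under the default path, restart
    sudo service nova-compute stop
    sudo mount /dev/sdc1 /var/lib/nova/instances
    # or, instead of mounting, symlink the directory to wherever the space is:
    #   sudo ln -s /mnt/bigdisk/instances /var/lib/nova/instances
    sudo service nova-compute start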
wariknotmyname: oh yes, i have the second proxy server in mind20:11
warikbut for now, I will just setup on20:11
warikone20:11
*** willaerk has joined #openstack20:12
wariknotmyname: I have to read a little more about all the components.. :)20:13
warikdo you think,  1 proxy + 2 or 3 storage servers are fair enough to start?20:13
notmynamewarik: for a small HA cluster (which is what you are describing), you could probably pretty easily get away with running everything on 4 boxes. then load balance however many of the proxies you need (2-all 4)20:13
notmynamewarik: you need 3 minimum, and 4+ is recommended20:14
warikfor the storage node you mean? 4 storages and 1 proxy20:14
*** wonk has joined #openstack20:15
notmynamewarik: no. have 5 boxes. run everything (proxy+account+container+object) on each of them. then run your load balancer configured to talk to 2-5 of the proxies (depending on how much traffic you have)20:16
warikoh!20:16
warikthat will be 5 zones then20:17
notmynamewarik: right20:17
warikok20:17
notmynamewarik: probably for your use case you won't need to split the account+container servers20:17
notmynamewarik: of course, you know your use case better than me20:17
wariknotmyname :) sure but I really appreciate your help20:18
notmynamewarik: if you did want to separate them, you'd need at least 3, and that gets proportionally very expensive when you're only talking about 10TB total20:18
*** DaOmarN has quit IRC20:20
*** dolphm has joined #openstack20:20
wariknotmyname: that makes sense20:20
*** Remco_ has joined #openstack20:21
*** Remco_ has quit IRC20:22
notmynamewarik: FWIW, you should be able to support around 1K req/sec/proxy in a "normal" (ie non-optimized) setup20:23
*** mattray has joined #openstack20:23
*** dspano has quit IRC20:23
warikthat's fair!  Regarding the load balancer for the proxies? do you have recommandation20:25
warik*recommendation ?20:25
gilbakrunkAnyone? Qcow2, writeback vs writethrough for ephemeral storage?20:26
notmynamewarik: zeus (commercial, called something else now; it's what we use at rackspace) or pound (free). you need your load balancer to terminate SSL. nginx is a bad choice because it spools to disk20:26
*** davepigott has quit IRC20:27
warikalright! i'll keep that in mind!20:27
wariknotmyname: thanks a ton for your time!20:28
notmynamewarik: np. once you get your cluster up and running, I'd love to be able to read about it online20:28
*** russf has joined #openstack20:28
*** xthaox has quit IRC20:28
wariknotmyname: I'll let you know how it goes for sure!20:28
notmynamecool20:28
*** almaisan-away is now known as al-maisan20:28
warik:)20:29
*** dendro-afk is now known as dendrobates20:29
uvirtbotNew bug: #928855 in openstack-manuals "Install glance registry conf file fix" [High,Confirmed] https://launchpad.net/bugs/92885520:30
*** pixelbeat has quit IRC20:30
*** ergalassi has joined #openstack20:31
*** davepigott has joined #openstack20:31
*** Ryan_Lane has quit IRC20:32
*** natea_ has joined #openstack20:34
*** archit_ has joined #openstack20:35
*** aculich has joined #openstack20:35
*** Archit has quit IRC20:35
*** natea has quit IRC20:36
*** natea_ is now known as natea20:36
*** dspano has joined #openstack20:36
*** Gordonz has joined #openstack20:38
*** imsplitbit has joined #openstack20:39
*** AndrewWeiss_ has joined #openstack20:40
*** russf has quit IRC20:40
*** russf has joined #openstack20:41
*** jperkin has quit IRC20:42
*** russf has quit IRC20:43
gilbakrunkDoes anyone use cache=writeback with QCow2/KVM?20:43
*** AndrewWeiss has quit IRC20:44
*** AndrewWeiss_ is now known as AndrewWeiss20:44
*** jperkin has joined #openstack20:45
*** tryggvil_ has joined #openstack20:45
*** russf has joined #openstack20:46
*** ergalassi has quit IRC20:48
*** Vince_ has joined #openstack20:49
*** vincentricci has quit IRC20:51
*** davepigott has quit IRC20:51
*** esker has joined #openstack20:53
*** ekaleido has joined #openstack20:53
*** judd7 has quit IRC20:55
*** russf has joined #openstack20:56
*** LinuxJedi has quit IRC20:56
*** BasTichelaar has quit IRC20:59
*** erics has joined #openstack20:59
*** Darkskill has joined #openstack21:00
*** erics has quit IRC21:00
*** rkukura has quit IRC21:01
*** hggdh has quit IRC21:01
*** russf has quit IRC21:02
*** mnewby_ has joined #openstack21:03
*** rnorwood has joined #openstack21:03
*** mnewby has quit IRC21:04
*** mnewby_ is now known as mnewby21:04
*** hggdh has joined #openstack21:04
*** whenry has joined #openstack21:05
*** hub_cap has quit IRC21:06
*** hub_cap has joined #openstack21:07
*** mutex has joined #openstack21:08
*** jog0 has joined #openstack21:08
*** dprince has quit IRC21:09
*** ekaleido has left #openstack21:09
*** phschwartz|rem has quit IRC21:11
*** phschwartz|rem has joined #openstack21:11
*** russf has joined #openstack21:12
*** rnorwood has quit IRC21:12
*** russf has quit IRC21:12
*** chaosdonkey has joined #openstack21:13
*** dolphm has quit IRC21:13
*** russf has joined #openstack21:14
*** andrewsmedina has quit IRC21:21
*** deshantm has quit IRC21:21
*** davepigott has joined #openstack21:22
*** hub-cap has joined #openstack21:23
*** dolphm has joined #openstack21:23
*** chaosdonkey has quit IRC21:24
*** russf has quit IRC21:24
*** dolphm has joined #openstack21:24
*** praefect has quit IRC21:25
mutexHi, I just installed the dodi deployer on a machine21:25
*** russf has joined #openstack21:25
*** apebit has joined #openstack21:25
*** deshantm has joined #openstack21:25
*** hub_cap has quit IRC21:26
*** hub-cap is now known as hub_cap21:26
mutexI guess I am a bit confused about what a 'proposal' is21:26
mutexI am trying to get a simple nova,swift,glance stack installed on one piece of hardware and do some provisioning tests21:26
*** Remco_ has joined #openstack21:28
*** mattray1 has joined #openstack21:31
*** mattray has quit IRC21:33
davepigottnotmyname: mjfork: anybody. I now have openstack up and running, and I did the magic runes to allow icmp and tcp, but I can't ping or ssh onto the instance that is running (from client or server). Any idea what I might have missed? I assume the magic runes are run on the server and not the client, since the client can already ssh21:34
*** LinuxJedi has joined #openstack21:34
*** russf has quit IRC21:36
*** arBmind has quit IRC21:36
davepigottjamespage: ^21:37
*** Darkskill260 has joined #openstack21:37
*** rnorwood has joined #openstack21:37
*** natea_ has joined #openstack21:37
*** natea has quit IRC21:37
*** natea_ is now known as natea21:37
*** warik has quit IRC21:38
*** ibarrera has joined #openstack21:38
mjforkdavepigott: connect to the instance via VNC and see if it actually got an IP21:39
*** warik has joined #openstack21:40
mjforkannegentle: did i see a call for translators yesterday some place?21:40
davepigottmjfork: If I do a describe-instance it says it has one. Is that not the same?21:40
*** Darkskill has quit IRC21:40
mjforkno21:40
davepigottmjfork: How do I vnc into it?21:40
mjforkthat was allocated by openstack, but doesn't mean the machine successfully picked it up21:40
davepigottAh right21:41
mjforkvirsh list21:41
mjforkvirsh dumpxml <instance id> | grep vnc21:41
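A hedged example of that lookup (instance name and port illustrative); the reported port on the compute node is what a VNC client connects to:

    virsh list                                   # note the instance name, e.g. instance-00000001
    virsh dumpxml instance-00000001 | grep vnc   # e.g. <graphics type='vnc' port='5900' ...>
    # then, from a workstation with network access to the compute node:
    vncviewer compute-node-ip:0                  # display 0 corresponds to port 5900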
*** phschwartz|rem has quit IRC21:41
*** wilmoore has joined #openstack21:41
dspanodavepigott: When that happens to me, there's usually something in the nova-network logs.21:42
*** Ryan_Lane has joined #openstack21:42
mjforkor nova-compute logs as well21:42
*** phschwartz|rem has joined #openstack21:42
davepigottdspano: mjfork OK. I'll play with it. Have to run to a meeting. Thanks for your help!21:42
*** warik has quit IRC21:43
*** davepigott has quit IRC21:43
mjforknp21:43
*** sandywalsh has quit IRC21:43
*** warik has joined #openstack21:43
*** littleidea has quit IRC21:44
gilbakrunkHi, does anyone use cache=writeback with QCow2/KVM? I can't think of a reason why not to do that for ephemeral storage. Thanks in advance.21:44
*** littleidea has joined #openstack21:44
*** jog0_ has joined #openstack21:45
*** apebit has quit IRC21:45
aliguorigilbakrunk, it all depends on what your expectations of "ephemeral" are.  if it's truly ephemeral, you might as well use cache=unsafe21:45
gilbakrunkaliguori: I've never heard of cache=unsafe. This is a valid cache option with qcow2?21:46
*** ewindisch_ has joined #openstack21:47
aliguoriyes21:47
*** littleidea has quit IRC21:47
gilbakrunkaliguori: With Openstack, my expectation with ephemeral is that I lose all data in case of a failure. Nothing more. Nothing less.21:47
*** littleidea has joined #openstack21:48
*** ewindisch has quit IRC21:48
*** ewindisch_ is now known as ewindisch21:48
aliguorigilbakrunk, then cache=unsafe would be the fastest option21:48
*** jog0 has quit IRC21:48
aliguorigilbakrunk, although one thing to keep in mind, cache=writeback/unsafe allows a guest to generate a lot of dirty page cache entries21:48
gilbakrunkaliguori: Yup thought of that. More memory pressure on the host21:49
aliguoriand the dirty page cache limit is a globally controlled setting, so one guest can potentially generate lots of memory pressure21:49
aliguoriyeah21:49
*** jog0_ has quit IRC21:49
gilbakrunkaliguori: I think it's worth a try though.21:49
aliguorii've seen OOMs from this before21:49
gilbakrunkThanks for the heads up21:49
aliguoribut it's under very synthetic, heavily loaded environments21:49
aliguorinp21:50
gilbakrunkThat's similar to the POC workload that I've got right now21:50
gilbakrunkAnd BTW: Should I have other expectations from ephemeral?21:50
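For reference, a sketch of where the cache mode ends up on the qemu command line for a qcow2 ephemeral disk (paths and sizes illustrative; this is not a nova config flag shown in this discussion):

    # cache=unsafe behaves like writeback but also ignores guest flush requests,
    # which is only acceptable if losing the disk on failure is already expected
    qemu-system-x86_64 -m 1024 \
        -drive file=/var/lib/nova/instances/instance-00000001/disk,format=qcow2,if=virtio,cache=writeback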
*** llang629 has joined #openstack21:51
*** davepigott has joined #openstack21:53
*** mattstep has quit IRC21:54
*** justinsb has quit IRC21:55
gilbakrunkaliguori: Thanks for your help. It's much appreciated.21:55
*** mattstep has joined #openstack21:55
*** Pr0toc0l has joined #openstack21:55
Pr0toc0lhello all...21:55
*** natea has quit IRC21:55
Pr0toc0lquick question on nova-compute and iptables....is a flat file used to store the iptables rules when ip addresses are associated to an instance?21:56
*** russf has joined #openstack21:57
*** rkukura has joined #openstack22:00
*** miclorb_ has joined #openstack22:00
*** natea has joined #openstack22:00
*** jog0 has joined #openstack22:01
*** russf has quit IRC22:01
*** mrjazzcat has quit IRC22:02
*** jog0 has quit IRC22:02
*** russf has joined #openstack22:02
*** littleidea has quit IRC22:03
*** littleidea has joined #openstack22:05
*** apebit has joined #openstack22:06
*** andrewsmedina has joined #openstack22:08
*** qazwsx has quit IRC22:09
*** marrusl has quit IRC22:10
*** marrusl has joined #openstack22:11
*** nati2 has quit IRC22:11
*** russf has quit IRC22:12
*** blamar_ has quit IRC22:12
*** dendrobates is now known as dendro-afk22:13
davepigottmjfork: dspano: The network log seems to be hanging at "Attempting to grab semaphore "get_dhcp" for method "_get_dhcp_ip"…". In the log I see messages of the type: "WARNING nova.network.manager [-] No fixed IPs for instance x", but not in the case of every instance. The nova-compute log doesn't seem to change when I start and stop instances.22:15
mjforkare you out of fixed ips?22:16
*** dendro-afk is now known as dendrobates22:16
davepigottmjfork: I have yet to allocate one IP, so unless I have zero, no22:17
davepigottmjfork: And it's floating in this instance22:18
*** stuntmachine has quit IRC22:18
*** martine has quit IRC22:18
*** sandywalsh has joined #openstack22:18
mjforkverify you have an unallocated floating ip22:19
mjforkdo you have autoassign on? seem to recall that being broken in diablo22:19
davepigottmjfork: Hmm. Where's that set?22:19
*** massiverobok has quit IRC22:19
davepigottmjfork: How do I get the allocation/free list?22:20
davepigottmjfork: Also, the instances always come up pending. I have to reboot them to get them to a running state22:21
*** llang629 has left #openstack22:23
*** AndrewWeiss has quit IRC22:23
*** al-maisan is now known as almaisan-away22:24
mjforkdavepigott: sorry, need to run now. i believe the reboot issue was resolved a while back22:24
dspanoI had to go into the fixed_ips table and set the reserved and allocated ips to false. Not sure if that was the proper way to do things. Wouldn't necessarily do it that way in production, but I'm still testing.22:24
mjforkare yuo on standard PPA packages?22:24
davepigottmjfork: yes22:24
davepigottmjfork: No worries. You've been a great help22:25
davepigottdspano: Where's that done?22:25
dspanodavepigott: In the database.22:25
mjforkstandard PPA packages have lots of problems22:26
davepigottdspano: Oh. Right22:26
mjforki strongly recommend Kiall's PPA repository at https://github.com/managedit/22:26
*** dwcramer has quit IRC22:26
davepigottWhen I do nova-manage floating list I get a list 192.168.2.x. The IP it says when I look at the instance is 192.168.4.x - that's odd22:26
davepigottmjfork: OK. I'll look at it. Thanks22:27
mjforkfloating ips are different than private22:27
davepigottmjfork: Ah. ok22:27
mjforkevery instance gets a fixed IP22:27
mjforkfloating ones are assigned on demand, not automatically22:27
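A hedged sketch of that on-demand flow with the euca2ools already used earlier in this discussion (instance id and address illustrative, taken from a floating range that has already been created):

    euca-allocate-address                              # pulls a free address from the floating pool
    euca-associate-address -i i-00000001 192.168.2.3   # attach it to a running instance
    euca-describe-addresses                            # confirm the association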
dspanoYou can query your fixed ips with nova-manage fixed list22:29
davepigottdspano: Yeah, with that I get the 192.168.2.x list22:30
*** blamar_ has joined #openstack22:30
dspanodavepigott: Is 192.168.4.0/24 supposed to be your floating ips?22:30
davepigottdspano: I have no idea where that number came from22:31
*** russf has joined #openstack22:31
*** russf has quit IRC22:31
*** phschwartz|rem has quit IRC22:34
*** chaosdonkey has joined #openstack22:35
*** mnabil has joined #openstack22:35
*** natea has quit IRC22:35
*** hub_cap has quit IRC22:36
*** phschwartz|rem has joined #openstack22:37
*** ibarrera has quit IRC22:37
davepigottdspano: When I created the floating list I set it to 192.168.2.0/2422:37
dspanoAre your fixed and floating on the same subnet?22:38
*** russf has joined #openstack22:38
*** mattray1 is now known as mattray22:39
*** mattray has joined #openstack22:39
*** russf has quit IRC22:41
dspanodavepigott: Sorry, I'm toggling back and forth between windows. Looking back at your original log entry, you said it wasn't doing it with every instance, correct?22:41
dspanodavepigott: When that happened to me, it was because some fixed ips never got unreserved in the fixed_ips table, but the dnsmasq server was leasing them anyway.22:42
dspanodavepigott: I don't remember this being a problem when I was using just a single network controller. I think it may have had to do with using multi-host.22:43
*** phschwartz|rem has quit IRC22:43
davepigottdspano: No problem. I'm doing the same. Hmm. But even when an ip address *seems* to be assigned, I just can't ping or ssh to it, and vnc shows no instances22:43
dspanodavepigott: My quick fix was unreserving them and restarting the network controller. I still haven't had time to research enough to see if anyone found a permanent solution.22:43
davepigottdspano: How do I unreserve them? In the db?22:44
davepigottdspano: Which db/table/column?22:45
dspanodavepigott: After you unreserve the offending ips, I would stop nova-network, run 'killall dnsmasq', then restart it.22:45
*** russf has joined #openstack22:45
davepigottdspano: Yeah, I've seen that suggested before. Do I have to kill the instance and its image22:46
*** phschwartz|rem has joined #openstack22:46
davepigottdspano: But how do I unreserve the ips??22:46
*** jj0hns0n has quit IRC22:46
*** apebit has quit IRC22:47
*** sandywalsh has quit IRC22:47
davepigottdspano: Oh. Wait. Just reading the network management stuff. May know what's happening22:47
*** Remco_ has quit IRC22:48
dspanodavepigott: The proper way is to use nova-manage fixed unreserve.22:48
archit_Could anybody say how we can manually assign a floating IP to an instance, rather than relying on auto allocation?22:49
*** davepigott has quit IRC22:49
dspanodavepigott: The hard way is logging into your database and running update fixed_ips set reserved=false where address=<address you're having trouble with>.22:51
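Both variants together as a sketch, with an illustrative address; the nova-manage form is the cleaner one:

    # preferred: let nova release the address
    nova-manage fixed unreserve 192.168.2.34
    # or directly in the database
    psql -U nova -d nova -c "UPDATE fixed_ips SET reserved = false WHERE address = '192.168.2.34';"
    # then bounce the network controller and its dnsmasq processes
    sudo service nova-network stop
    sudo killall dnsmasq
    sudo service nova-network start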
*** cp16net has quit IRC22:52
*** ppradhan has left #openstack22:53
*** vincentricci has joined #openstack22:53
*** Vince_ has quit IRC22:54
*** Darkskill260 has quit IRC22:55
*** AlanClark has quit IRC22:56
dspanodavepigott: Good luck. I've gotta run before my wife kills me. I'll be on tomorrow if you need any more help.22:56
*** davepigott has joined #openstack22:56
*** bengrue has joined #openstack22:56
*** ayoung is now known as ayoung-home22:58
*** natea has joined #openstack22:58
*** zzed has quit IRC22:58
*** aculich has quit IRC22:59
*** dspano has quit IRC22:59
*** sandywalsh has joined #openstack23:02
*** Gordonz has quit IRC23:03
*** aloga has quit IRC23:03
*** nati2 has joined #openstack23:05
*** dendrobates is now known as dendro-afk23:06
*** martianixor has joined #openstack23:06
*** apebit has joined #openstack23:06
*** ahasenack has joined #openstack23:06
*** dendro-afk is now known as dendrobates23:08
*** mnabil has quit IRC23:09
*** albert23 has quit IRC23:09
*** clauden_ has quit IRC23:09
*** lloydde has quit IRC23:09
*** robbiew has quit IRC23:09
*** dovetaildan has quit IRC23:09
*** lts has quit IRC23:09
*** nacx has quit IRC23:09
*** gcc has quit IRC23:09
*** Ruetobas has quit IRC23:09
*** odyi has quit IRC23:09
*** gcc has joined #openstack23:09
*** dovetaildan has joined #openstack23:09
*** Ruetobas has joined #openstack23:09
*** Yak-n-Yeti has quit IRC23:09
*** nacx has joined #openstack23:09
*** lts has joined #openstack23:09
*** lloydde has joined #openstack23:09
*** mnabil has joined #openstack23:09
*** albert23 has joined #openstack23:09
*** llang629 has joined #openstack23:09
*** gilbakrunk has quit IRC23:09
*** robbiew has joined #openstack23:09
*** aloga has joined #openstack23:10
*** imsplitbit has quit IRC23:20
*** littleidea has quit IRC23:24
*** jj0hns0n has joined #openstack23:26
*** nati2 has quit IRC23:27
*** mgius has joined #openstack23:30
*** MarkAtwood has joined #openstack23:30
*** jj0hns0n has quit IRC23:30
*** odyi has joined #openstack23:32
*** odyi has joined #openstack23:32
*** cryptk is now known as cryptk|offline23:44
*** rnorwood has quit IRC23:45
*** mrjazzcat has joined #openstack23:46
*** mattray has quit IRC23:47
wariknotmyname: real quick, what does the "workers" flag stand for in the swift config files?23:47
notmynamewarik: how many child processes are forked to do work. 1 per core is a good starting point23:48
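For example, in an illustrative /etc/swift/proxy-server.conf (the same flag appears in the account, container, and object server configs):

    [DEFAULT]
    bind_port = 8080
    # roughly one worker per CPU core is a good starting point
    workers = 8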
warikthanks!23:48
*** Leseb has quit IRC23:50
*** RicardoSSP has joined #openstack23:51
*** Yak-n-Yeti has joined #openstack23:55
*** lloydde has quit IRC23:56
*** apebit has quit IRC23:56
*** Brlink has joined #openstack23:56
*** thingee has joined #openstack23:58
