*** stewart has joined #openstack | 00:01 | |
*** tryggvil_ has quit IRC | 00:04 | |
*** deirdre_ has quit IRC | 00:04 | |
*** vladimir3p has quit IRC | 00:06 | |
*** freeflying has joined #openstack | 00:14 | |
nhm | anyone tried setting up heterogeneous compute nodes? | 00:15 |
dchalloner | Does anyone know how you go about changing the auth url openstack reports back to the client? Right now mine is https://<ip address> and I think java will never do SSL to an IP even if you set the CN correctly. | 00:15 |
*** ewindisch has quit IRC | 00:17 | |
*** ccc11 has joined #openstack | 00:19 | |
nhm | woo, our grant got funded. I've got about $170k to spend on an openstack deployment. :D | 00:24 |
*** deirdre_ has joined #openstack | 00:34 | |
*** shaon_ has quit IRC | 00:35 | |
heckj | nhm: congrats! | 00:35 |
*** ton_katsu has joined #openstack | 00:37 | |
*** jakedahn has quit IRC | 00:38 | |
*** obino has quit IRC | 00:43 | |
*** worstadmin has joined #openstack | 00:43 | |
*** tryggvil_ has joined #openstack | 00:45 | |
*** mszilagyi has quit IRC | 00:46 | |
*** obino has joined #openstack | 00:47 | |
*** sam_itx has joined #openstack | 00:48 | |
*** HugoKuo has joined #openstack | 00:54 | |
*** jj0hns0n has joined #openstack | 00:55 | |
HugoKuo | Morning guys | 00:55 |
*** shaon has joined #openstack | 00:58 | |
*** obino has quit IRC | 00:58 | |
nhm | heckj: thanks! | 00:59 |
*** heckj has quit IRC | 00:59 | |
*** nerdstein has quit IRC | 01:08 | |
*** ncode has joined #openstack | 01:12 | |
*** shaon has quit IRC | 01:12 | |
thickskin | hi all | 01:13 |
thickskin | does anyone know about using qcow2 images in xen? | 01:13 |
HugoKuo | hi all | 01:15 |
*** nerdstein has joined #openstack | 01:15 | |
*** jdurgin has quit IRC | 01:16 | |
*** tryggvil_ has quit IRC | 01:16 | |
*** iOutBackDngo is now known as OutBackDingo | 01:19 | |
*** nerdstein has quit IRC | 01:19 | |
*** Alowishus has joined #openstack | 01:27 | |
*** jakedahn has joined #openstack | 01:28 | |
*** rfz__ has joined #openstack | 01:31 | |
*** rfz_ has quit IRC | 01:31 | |
*** mrrk has quit IRC | 01:31 | |
*** johnmark has left #openstack | 01:36 | |
*** rfz__ has quit IRC | 01:39 | |
*** deepest has joined #openstack | 01:43 | |
*** jtanner_ has quit IRC | 01:47 | |
*** mattray has joined #openstack | 01:51 | |
*** jakedahn has quit IRC | 01:52 | |
*** jfluhmann has joined #openstack | 01:52 | |
*** James has joined #openstack | 01:53 | |
*** James is now known as Guest39684 | 01:54 | |
*** llang629_ has quit IRC | 01:57 | |
*** mrrk has joined #openstack | 02:02 | |
*** jtanner_ has joined #openstack | 02:05 | |
*** stewart has quit IRC | 02:07 | |
*** deirdre_ has quit IRC | 02:07 | |
*** huslage has quit IRC | 02:08 | |
*** msivanes has joined #openstack | 02:10 | |
*** jtanner_ has quit IRC | 02:11 | |
*** cereal_bars has joined #openstack | 02:16 | |
*** clauden has quit IRC | 02:25 | |
*** mat_angin is now known as blackshirt | 02:27 | |
*** llang629_ has joined #openstack | 02:29 | |
*** llang629_ is now known as llang629 | 02:29 | |
*** HugoKuo has quit IRC | 02:31 | |
*** HugoKuo has joined #openstack | 02:31 | |
*** mattray has quit IRC | 02:31 | |
*** HugoKuo has quit IRC | 02:32 | |
*** HugoKuo has joined #openstack | 02:32 | |
*** fayce has joined #openstack | 02:32 | |
*** HugoKuo has quit IRC | 02:34 | |
*** HugoKuo1 has joined #openstack | 02:35 | |
*** blackshirt is now known as mat_angin | 02:36 | |
*** miclorb has quit IRC | 02:43 | |
*** anotherjesse has quit IRC | 02:44 | |
*** vernhart has quit IRC | 02:44 | |
*** rms-ict has joined #openstack | 02:44 | |
*** chomping has joined #openstack | 02:52 | |
*** fayce has quit IRC | 02:57 | |
*** jj0hns0n has quit IRC | 02:58 | |
*** jj0hns0n has joined #openstack | 03:05 | |
*** vernhart has joined #openstack | 03:08 | |
*** rms-ict has quit IRC | 03:14 | |
*** novemberstorm has joined #openstack | 03:14 | |
*** novemberstorm has quit IRC | 03:20 | |
*** cereal_bars has quit IRC | 03:22 | |
*** jj0hns0n has quit IRC | 03:22 | |
*** mattray has joined #openstack | 03:26 | |
*** osier has joined #openstack | 03:27 | |
*** jj0hns0n has joined #openstack | 03:29 | |
*** t9md has joined #openstack | 03:33 | |
*** DrHouseMD is now known as HouseAway | 03:39 | |
*** nati has joined #openstack | 03:40 | |
*** mdomsch has joined #openstack | 03:41 | |
*** dhanuxe has joined #openstack | 03:49 | |
*** HouseAway has quit IRC | 03:53 | |
*** rms-ict has joined #openstack | 04:06 | |
*** techthumb has joined #openstack | 04:09 | |
*** jfluhmann has quit IRC | 04:10 | |
techthumb | is there a tutorial to get nova-compute to talk to an esxi host? | 04:10 |
*** jfluhmann has joined #openstack | 04:10 | |
*** techthumb has quit IRC | 04:12 | |
*** miclorb has joined #openstack | 04:13 | |
*** jfluhmann has quit IRC | 04:15 | |
*** katkee has joined #openstack | 04:16 | |
*** msivanes has quit IRC | 04:17 | |
*** llang629 has left #openstack | 04:19 | |
*** martine has joined #openstack | 04:24 | |
*** nelson____ has joined #openstack | 04:24 | |
*** katkee has quit IRC | 04:26 | |
*** rms-ict has quit IRC | 04:27 | |
*** katkee has joined #openstack | 04:28 | |
*** HugoKuo has joined #openstack | 04:29 | |
*** HugoKuo1 has quit IRC | 04:29 | |
*** HugoKuo has quit IRC | 04:32 | |
*** HugoKuo has joined #openstack | 04:32 | |
*** jj0hns0n has quit IRC | 04:33 | |
*** jj0hns0n has joined #openstack | 04:33 | |
*** HugoKuo has quit IRC | 04:44 | |
*** HugoKuo has joined #openstack | 04:44 | |
*** martine has quit IRC | 04:44 | |
*** dgags has joined #openstack | 04:49 | |
*** reed has quit IRC | 04:51 | |
*** mattray has quit IRC | 04:53 | |
*** worstadmin_ has joined #openstack | 04:55 | |
*** worstadmin has quit IRC | 04:55 | |
*** dgags has quit IRC | 05:05 | |
uvirtbot | New bug: #824967 in nova "Parent instance is not bound to a Session; lazy load operation of attribute 'instance' cannot proceed" [Undecided,In progress] https://launchpad.net/bugs/824967 | 05:06 |
*** scollier has quit IRC | 05:15 | |
*** scollier has joined #openstack | 05:17 | |
*** j05h has joined #openstack | 05:17 | |
*** martine has joined #openstack | 05:23 | |
*** mshadle has left #openstack | 05:23 | |
*** SplasPood has quit IRC | 05:35 | |
*** SplasPood has joined #openstack | 05:36 | |
*** SplasPood has joined #openstack | 05:38 | |
*** martine has quit IRC | 05:39 | |
*** tsuzuki has joined #openstack | 05:47 | |
*** HugoKuo has quit IRC | 05:54 | |
*** ton_katsu has quit IRC | 06:09 | |
*** ton_katsu has joined #openstack | 06:10 | |
*** viveksnv has joined #openstack | 06:11 | |
viveksnv | hi all | 06:11 |
viveksnv | can we use different virtualization models like kvm, Xen, qemu etc. with a single openstack setup?.. | 06:12 |
*** obino has joined #openstack | 06:13 | |
viveksnv | i have an ubuntu server with Intel VT-capable hardware..how can i try different virtualization models...? | 06:14 |
viveksnv | is it possible ? | 06:14 |
alekibango | viveksnv: it is | 06:16 |
alekibango | but not in single openstack setup i am afraid | 06:16 |
*** ejat has joined #openstack | 06:16 | |
*** rchavik has joined #openstack | 06:16 | |
*** llang629_ has joined #openstack | 06:18 | |
*** guigui has joined #openstack | 06:19 | |
viveksnv | alekibango: thanks for your reply..i am confused about a few things...what is the role of nova-compute?... | 06:20 |
viveksnv | alekibango: will one nova-compute work with one virtualization model..(one for kvm, one for xen, one for qemu)? | 06:20 |
*** llang629 has joined #openstack | 06:23 | |
*** llang629 has left #openstack | 06:23 | |
*** llang629_ has quit IRC | 06:26 | |
dhanuxe | hello... | 06:32 |
dhanuxe | how do i fix the error from the bug at this url? http://j.mp/p6g4oQ | 06:32 |
alekibango | compute is here for managing virtual guests | 06:34 |
alekibango | you need to pick one for one install | 06:35 |
alekibango | iirc | 06:35 |
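alekibango's point above is that each nova-compute host binds to a single hypervisor through its flags, so a mixed cloud means mixed compute hosts rather than one host running several. A minimal sketch of the relevant nova.conf entries, assuming the 2011-era flagfile syntax (values illustrative):

```
# /etc/nova/nova.conf (2011-era flagfile syntax) -- each nova-compute
# host picks exactly one hypervisor driver; values here are illustrative.
--connection_type=libvirt   # the libvirt driver covers kvm, qemu, and xen
--libvirt_type=kvm          # set to qemu or xen on other hosts
# A mixed cloud is built by running different flag values on different
# compute hosts, not by mixing hypervisors on a single host.
```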
*** deepest has quit IRC | 06:35 | |
*** zul has joined #openstack | 06:38 | |
*** zul has quit IRC | 06:42 | |
*** llang629_ has joined #openstack | 06:49 | |
*** anotherjesse has joined #openstack | 06:52 | |
*** viveksnv has quit IRC | 06:54 | |
*** mrrk has quit IRC | 06:54 | |
*** kidrock has joined #openstack | 06:58 | |
kidrock | Hi everyone. | 06:58 |
kidrock | I installed the newest nova milestone version | 06:58 |
kidrock | created the zipfile and ran source novarc | 06:59 |
*** javiF has joined #openstack | 06:59 | |
kidrock | euca-describe-instances | 06:59 |
kidrock | the following error occurred | 07:00 |
kidrock | http://paste.openstack.org/show/2148/ | 07:00 |
kidrock | pls help me. Thanks | 07:00 |
*** deepest has joined #openstack | 07:02 | |
*** mgoldmann has joined #openstack | 07:12 | |
*** cbeck has quit IRC | 07:13 | |
*** cbeck has joined #openstack | 07:13 | |
*** zul has joined #openstack | 07:16 | |
*** siwos has joined #openstack | 07:22 | |
*** mrrk has joined #openstack | 07:24 | |
*** nicolas2b has joined #openstack | 07:25 | |
*** katkee has quit IRC | 07:26 | |
*** miclorb has quit IRC | 07:45 | |
*** anotherjesse has quit IRC | 07:45 | |
*** truijllo has joined #openstack | 07:53 | |
*** dhanuxe has quit IRC | 07:56 | |
uvirtbot | New bug: #825024 in glance "'glance add' treats size=xxx as a normal property" [Undecided,New] https://launchpad.net/bugs/825024 | 07:56 |
*** javiF has quit IRC | 07:57 | |
*** javiF has joined #openstack | 07:57 | |
*** zul has quit IRC | 08:01 | |
*** nickon has joined #openstack | 08:01 | |
*** nicolas2b has quit IRC | 08:02 | |
*** mrrk has quit IRC | 08:02 | |
*** mrrk has joined #openstack | 08:05 | |
*** rms-ict has joined #openstack | 08:08 | |
*** teamrot has quit IRC | 08:12 | |
*** willaerk has joined #openstack | 08:14 | |
deepest | Hi everyone! | 08:14 |
deepest | I want to ask you about the Swift again | 08:15 |
deepest | I received some different information about the Swift | 08:15 |
deepest | some people told me about a limit imposed by the smallest disk drive in a cluster. | 08:16 |
deepest | if the first storage node = 100GB, the second = 250GB, and the third = 500GB, then I send 120GB to Swift and it fails when the first storage node is full. | 08:16 |
deepest | That would mean the Swift architecture has no mechanism to change the location when one or more disks run out of space. | 08:17 |
deepest | The other information is that Swift doesn't care about the individual disk drives you have, it just cares about the total disk space. | 08:17 |
deepest | That would mean that if you have thousands of disk drives numbered 1 to n and you transfer data to Swift, then when the 1st drive is full, Swift will put the rest of the data in another location. | 08:17 |
deepest | I am really confused. Do you have any document or tutorial covering this? If so, please point me to it. | 08:17 |
*** guigui has quit IRC | 08:19 | |
*** rms-ict has quit IRC | 08:26 | |
deepest | any ideas? | 08:26 |
*** jeffjapan has quit IRC | 08:27 | |
*** rms-ict has joined #openstack | 08:32 | |
*** anp_ has quit IRC | 08:39 | |
*** zul has joined #openstack | 08:41 | |
*** tryggvil has joined #openstack | 08:49 | |
*** guigui has joined #openstack | 08:50 | |
*** CloudAche84 has joined #openstack | 08:53 | |
*** mrrk has quit IRC | 08:56 | |
*** deepest has quit IRC | 08:57 | |
*** deepest has joined #openstack | 08:58 | |
deepest | Hi everyone! | 08:58 |
deepest | I want to ask you about the Swift again | 08:58 |
CloudAche84 | morning | 08:58 |
*** rods has joined #openstack | 08:58 | |
*** darraghb has joined #openstack | 08:59 | |
deepest | hi CloudAche84 | 09:00 |
*** ejat has quit IRC | 09:00 | |
CloudAche84 | how many disks do you have in total is it just 3? | 09:00 |
*** BuZZ-T has quit IRC | 09:00 | |
deepest | no | 09:00 |
deepest | I mean it's not just 3 | 09:01 |
deepest | I got 2 kinds of conflicting information | 09:01 |
deepest | first, that swift cares about the size of each drive | 09:02 |
deepest | second, that swift only cares about the total size of all drives | 09:02 |
deepest | this is what confuses me | 09:03 |
CloudAche84 | so how many disks do you have and how many nodes? | 09:03 |
deepest | ah, right now I have 5 drives | 09:03 |
deepest | with 5 nodes | 09:04 |
deepest | and different sizes | 09:04 |
*** katkee has joined #openstack | 09:04 | |
deepest | what happens if I send more data than the size of the smallest drive? | 09:05 |
*** BuZZ-T has joined #openstack | 09:05 | |
*** BuZZ-T has joined #openstack | 09:05 | |
*** tryggvil has quit IRC | 09:05 | |
deepest | some guys said no problem, but some guys said it would fail | 09:06 |
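For the record, Swift balances unequal drives through per-device weights in the ring: a 500GB drive can carry five times the weight of a 100GB one, and objects are placed by hash across the whole ring rather than filling drives in order, though any single object must still fit on each drive holding a replica. A rough sketch with swift-ring-builder (IPs, ports, and device names invented):

```bash
# Rough sketch: weight devices proportionally to their size so the ring
# spreads partitions across unequal drives (IPs/devices are invented).
swift-ring-builder object.builder create 18 3 1
swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100  # 100GB drive
swift-ring-builder object.builder add z2-10.0.0.2:6000/sdb1 250  # 250GB drive
swift-ring-builder object.builder add z3-10.0.0.3:6000/sdb1 500  # 500GB drive
swift-ring-builder object.builder rebalance
```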
*** anp has joined #openstack | 09:08 | |
anp | hi | 09:08 |
anp | when installing the openstack CC and a compute node on the same machine | 09:08 |
anp | I get error: | 09:08 |
anp | Error: openstack-nova-compute-config conflicts with openstack-nova-cc-config | 09:08 |
anp | Error: openstack-nova-cc-config conflicts with openstack-nova-compute-config | 09:08 |
anp | I use CentOS 6 and Griddynamics repo | 09:08 |
anp | please help me | 09:08 |
*** deepest_ has joined #openstack | 09:14 | |
deepest_ | back again | 09:14 |
deepest_ | lost connection | 09:14 |
deepest_ | CloudAche84, are U there? | 09:14 |
*** deepest has quit IRC | 09:14 | |
*** kidrock has quit IRC | 09:16 | |
*** tryggvil has joined #openstack | 09:17 | |
*** rms-ict has quit IRC | 09:19 | |
*** irahgel has joined #openstack | 09:21 | |
*** deepest_ has quit IRC | 09:22 | |
*** deepest has joined #openstack | 09:27 | |
*** chomping has quit IRC | 09:28 | |
*** rms-ict has joined #openstack | 09:28 | |
*** deepest has quit IRC | 09:32 | |
uvirtbot | New bug: #825074 in nova "Release floating IP with OS API" [Undecided,New] https://launchpad.net/bugs/825074 | 09:32 |
*** chomping has joined #openstack | 09:32 | |
*** rms-ict has quit IRC | 09:34 | |
*** chomping has quit IRC | 09:35 | |
*** chomping has joined #openstack | 09:36 | |
*** rms-ict has joined #openstack | 09:36 | |
*** ccc11 has quit IRC | 09:36 | |
*** rms-ict has quit IRC | 09:47 | |
*** ton_katsu has quit IRC | 09:48 | |
*** arun_ has quit IRC | 09:50 | |
*** arun_ has joined #openstack | 09:50 | |
*** oziaczek has joined #openstack | 10:00 | |
oziaczek | i'm deploying openstack with the deployment tool. i installed the whole package with glance and swift as well. during nova installation i got: 2011-08-12 10:28:22,431 - ERROR - The process id of nova-volume is changing. 30254 -> 30334 2011-08-12 10:28:22,431 - ERROR - Error occured when starting the service(nova-volume). | 10:01 |
oziaczek | i created an lvm volume group | 10:01 |
oziaczek | i named it nova-volumes, everything seems fine in configuration, but it doesn't work | 10:02 |
oziaczek | i run service nova-volume start, i get information that it is running | 10:02 |
oziaczek | but later i can't find it in nova-manage service list | 10:03 |
oziaczek | anyone with some idea what is going on? | 10:03 |
*** medberry is now known as med_out | 10:06 | |
*** ewindisch has joined #openstack | 10:08 | |
*** ton_katsu has joined #openstack | 10:12 | |
*** SplasPood has quit IRC | 10:13 | |
*** SplasPood has joined #openstack | 10:13 | |
*** ton_katsu has quit IRC | 10:14 | |
*** ton_katsu has joined #openstack | 10:14 | |
*** jj0hns0n has joined #openstack | 10:16 | |
*** Ryan_Lane has joined #openstack | 10:16 | |
*** miclorb has joined #openstack | 10:21 | |
viraptor | oziaczek: /var/log/nova/nova-volume will tell you the truth... | 10:22 |
*** shang has quit IRC | 10:25 | |
*** tsuzuki has quit IRC | 10:27 | |
*** miclorb has quit IRC | 10:28 | |
*** worstadmin_ has quit IRC | 10:34 | |
*** miclorb has joined #openstack | 10:34 | |
*** miclorb has quit IRC | 10:35 | |
*** t9md has quit IRC | 10:39 | |
oziaczek | yes i got it! don't know why i hadn't checked them before! | 10:42 |
oziaczek | thanks | 10:42 |
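For anyone landing here later: nova-volume in this era expected an LVM volume group (named nova-volumes by default) and would die on startup when it was missing or empty, which is exactly the kind of thing the log viraptor points at reveals. A sketch of setting the VG up, with /dev/sdb as a placeholder device:

```bash
# Sketch: create the volume group nova-volume expects by default;
# /dev/sdb is a placeholder for a spare disk or partition.
pvcreate /dev/sdb
vgcreate nova-volumes /dev/sdb
vgs nova-volumes                   # confirm the VG is visible
service nova-volume restart
tail /var/log/nova/nova-volume     # check the log, as viraptor advised
```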
*** Ryan_Lane has quit IRC | 10:48 | |
*** ewindisch has quit IRC | 10:49 | |
*** lorin1 has joined #openstack | 10:51 | |
*** jj0hns0n has quit IRC | 10:55 | |
*** zul has quit IRC | 10:59 | |
*** infinite-scale has joined #openstack | 11:07 | |
*** AhmedSoliman has joined #openstack | 11:14 | |
*** ton_katsu has quit IRC | 11:15 | |
*** nid0 has quit IRC | 11:22 | |
*** mies has quit IRC | 11:26 | |
*** nid0 has joined #openstack | 11:26 | |
*** chomping has quit IRC | 11:27 | |
*** chomping has joined #openstack | 11:27 | |
*** ncode has quit IRC | 11:27 | |
*** martine has joined #openstack | 11:28 | |
*** chomping has quit IRC | 11:30 | |
*** joearnold has joined #openstack | 11:32 | |
*** mfer has joined #openstack | 11:46 | |
*** jtanner_ has joined #openstack | 11:47 | |
*** nerdstein has joined #openstack | 11:48 | |
*** AhmedSoliman has quit IRC | 12:00 | |
*** dendro-afk is now known as dendrobates | 12:01 | |
*** nicolas2b has joined #openstack | 12:01 | |
*** ncode has joined #openstack | 12:02 | |
*** nicolas2b has quit IRC | 12:04 | |
*** Ephur has joined #openstack | 12:19 | |
*** infinite-scale has quit IRC | 12:19 | |
*** dprince has joined #openstack | 12:28 | |
*** aliguori has joined #openstack | 12:29 | |
*** joearnold has quit IRC | 12:29 | |
*** mancdaz has quit IRC | 12:30 | |
*** mancdaz has joined #openstack | 12:30 | |
*** manish has joined #openstack | 12:33 | |
*** msivanes has joined #openstack | 12:35 | |
*** nelson____ has quit IRC | 12:37 | |
*** Ephur has quit IRC | 12:37 | |
*** shang has joined #openstack | 12:38 | |
*** shang has quit IRC | 12:38 | |
*** nmistry has joined #openstack | 12:42 | |
*** javiF has quit IRC | 12:43 | |
*** dendrobates is now known as dendro-afk | 12:47 | |
*** MarkAtwood has quit IRC | 12:48 | |
*** bsza has joined #openstack | 12:57 | |
*** PeteDaGuru has joined #openstack | 12:58 | |
*** allsystemsarego has joined #openstack | 12:59 | |
*** marrusl has joined #openstack | 13:03 | |
*** mdomsch has quit IRC | 13:04 | |
*** huslage has joined #openstack | 13:05 | |
*** lts has joined #openstack | 13:05 | |
*** dendro-afk is now known as dendrobates | 13:08 | |
*** jtanner_ has quit IRC | 13:10 | |
*** kashyap_ has joined #openstack | 13:16 | |
*** kbringard has joined #openstack | 13:17 | |
*** CloudAche84 has quit IRC | 13:19 | |
*** CloudAche84 has joined #openstack | 13:21 | |
*** ewindisch has joined #openstack | 13:22 | |
*** nmistry has quit IRC | 13:23 | |
*** whitt has joined #openstack | 13:24 | |
*** dendrobates is now known as dendro-afk | 13:27 | |
*** gnu111 has joined #openstack | 13:28 | |
*** jtanner has joined #openstack | 13:29 | |
*** dendro-afk is now known as dendrobates | 13:30 | |
*** ccc11 has joined #openstack | 13:34 | |
*** lborda has joined #openstack | 13:39 | |
*** mfer has quit IRC | 13:41 | |
*** jtanner has quit IRC | 13:43 | |
*** jtanner has joined #openstack | 13:44 | |
*** jfluhmann has joined #openstack | 13:47 | |
*** Dunkirk has joined #openstack | 13:47 | |
Dunkirk | Started from scratch, followed these instructions: http://wiki.openstack.org/RunningNova, and I can see in VNC that my VM is stuck trying to boot at the SeaBIOS screen. | 13:48 |
*** duffman has quit IRC | 13:50 | |
*** dendrobates is now known as dendro-afk | 13:52 | |
Dunkirk | I've seen this behavior before, but then I was getting messages about networking. I've scrapped the trunk packages and gone back to release, and I'm not seeing anything in the logs about networking now. | 13:53 |
Dunkirk | In any case, shouldn't following the instructions get me up and running? I don't understand what's wrong, so I don't have a clue as to what to try differently. | 13:54 |
*** cereal_bars has joined #openstack | 13:54 | |
Dunkirk | Can someone have pity on me and hit me with a cluebat? | 13:54 |
*** freeflying has quit IRC | 13:55 | |
dilemma | I think it's a bit early in the day. Those who wield the cluebats are still sleeping/breakfasting. | 13:56 |
*** TimR-L has joined #openstack | 13:56 | |
kbringard | when you say "Started from scratch", did you completely reinstall the operating system, or just reinstall OpenStack? | 13:57 |
kbringard | Dunkirk ^^ | 13:57 |
*** lborda has quit IRC | 13:58 | |
Dunkirk | kbringard: Just OpenStack. | 13:59 |
Dunkirk | kbringard: I checked all the nova-manage options to make sure that nothing was left defined, though. | 13:59 |
kbringard | on your compute node, I'd check the _base directory | 13:59 |
kbringard | and do a glance index | 13:59 |
kbringard | make sure the images you've uploaded are the correct size | 13:59 |
kbringard | and that it matches what's in _base | 13:59 |
Dunkirk | kbringard: This is new info. Where's _base? | 14:00 |
*** mrjazzcat-afk is now known as mrjazzcat | 14:00 | |
kbringard | in /var/lib/nova/instances | 14:00 |
kbringard | so, the quick shibby on how it works | 14:00 |
kbringard | I have no idea if this is what the issue is, btw | 14:00 |
kbringard | but sometimes this happens | 14:00 |
kbringard | glance pulls the image from wherever (by default it's /var/lib/glance/images/) | 14:01 |
kbringard | and serves it up via the API | 14:01 |
kbringard | the compute node downloads the image and stores it in /var/lib/nova/instances/_base (by default) | 14:01 |
kbringard | it then uses this as the backing image when it builds out the individual disk for the instance | 14:01 |
*** bcwaldon has joined #openstack | 14:01 | |
kbringard | and it keeps it cached, so subsequent instances launch more quickly | 14:02 |
kbringard | I've seen it, sometimes, get a bad image in the cache in _base on the compute node | 14:02 |
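A quick way to run the check kbringard describes, comparing what glance reports against what the compute node has cached (paths are the defaults mentioned above):

```bash
# Compare image sizes in glance against the compute node's cache
# (paths are the defaults from the discussion above).
glance index                           # lists registered images and sizes
ls -lh /var/lib/glance/images/         # what glance is serving
ls -lh /var/lib/nova/instances/_base/  # what nova-compute has cached
# If a cached file in _base looks truncated, deleting it forces a fresh
# download the next time an instance boots from that image.
```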
Dunkirk | kbringard: Well I don't know how big the image should be, but `glance index' is giving me an "unable to connect to server". | 14:02 |
kbringard | are you running it on the server glance is installed on? | 14:03 |
Dunkirk | Yeah, it's all one server | 14:03 |
kbringard | that would probably be your problem | 14:03 |
kbringard | check the glance logs | 14:03 |
kbringard | make sure it's running and accepting connections | 14:03 |
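The basic liveness check, assuming the Ubuntu packages' service names:

```bash
# Sketch: bring both glance services up, then retry the index
# (service names assume the Ubuntu packaging).
service glance-registry start
service glance-api start
glance index    # should now return the (possibly empty) image list
```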
*** duffman has joined #openstack | 14:04 | |
Dunkirk | OK, I've started glance-api and glance-registry... | 14:04 |
*** LiamMac has joined #openstack | 14:04 | |
kbringard | does glance index work now? | 14:05 |
Dunkirk | kbringard: it does, but it doesn't show any images... | 14:05 |
kbringard | there's your second problem :-) | 14:05 |
kbringard | I'm guessing you used the uec-publish stuff to upload them? | 14:05 |
Dunkirk | Roger | 14:06 |
kbringard | so, real quick, the way that works | 14:06 |
kbringard | and sorry, i'm just trying to help you (and whomever else) understand how this works so you can troubleshoot it better in the future :-) | 14:06 |
kbringard | cause it can be a black box, haha | 14:06 |
kbringard | so, the publish stuff was written for eucalyptus | 14:06 |
Dunkirk | kbringard: Dude, please, treat me like I'm a noob. Cause I am. | 14:06 |
kbringard | which uses the ec2 api | 14:06 |
kbringard | so what happens in openstack is | 14:07 |
alekibango | kbringard: there is etherpad page about this. let me find it | 14:07 |
kbringard | objectstore is what does the ec2 bundle stuff | 14:07 |
kbringard | ah, that would be helpful | 14:07 |
*** duffman has quit IRC | 14:07 | |
kbringard | it's probably more coherent than my "only 1 cup of coffee" rambling | 14:07 |
kbringard | but, the quick of it is | 14:07 |
Dunkirk | kbringard: Heh. | 14:07 |
kbringard | the uec-publish stuff uploads the stuff to objectstore | 14:08 |
alekibango | kbringard: but it might be wrong a bit, please continue there http://etherpad.openstack.org/create-instance-openstack | 14:08 |
kbringard | which was running | 14:08 |
kbringard | objectstore then unbundles the images and does all that fun stuff | 14:08 |
kbringard | and then uploads them to glance | 14:08 |
kbringard | so what most likely happened is, it shat itself because glance wasn't running | 14:08 |
*** dendro-afk is now known as dendrobates | 14:09 | |
kbringard | in theory, there should be something in the objectstore log | 14:09 |
kbringard | but, regardless, that is likely what happened | 14:09 |
kbringard | so, now that glance is running, I would try uploading your images again | 14:09 |
kbringard | and make sure you can see them in glance index | 14:09 |
alekibango | the relation between eucatools, nova-manage and glance and s3 storage in nova options should be better documented | 14:09 |
kbringard | and that they have sane sizes | 14:09 |
kbringard | I personally upload straight into glance | 14:10 |
alekibango | kbringard: when i did this, it never worked right for me | 14:10 |
kbringard | if you have ruby and rubygems running, you can gem install ogler | 14:10 |
alekibango | even stackops is failing on that one somehow | 14:10 |
kbringard | it's a glance uploader I wrote, and, at least for my environments, it works quite nicely | 14:10 |
*** grapex has joined #openstack | 14:10 | |
alekibango | kbringard: its long time since i tried | 14:10 |
Dunkirk | kbringard: I'll definitely check that out. | 14:10 |
alekibango | i should test latest prolly | 14:11 |
kbringard | or, alternatively | 14:11 |
Dunkirk | If I just re-do the publish command, it's telling me that the kernel image is already registered. | 14:11 |
kbringard | https://github.com/kevinbringard/OpenStack-tools/blob/master/glance-uploader.bash | 14:11 |
kbringard | not as elegant | 14:11 |
Dunkirk | How do I get nova to realize that it's not? | 14:11 |
alekibango | kbringard: i want to be able to use euca tools -- and upload images to glance using them... how? | 14:11 |
kbringard | but it wraps around the glance commands to upload images straight into glance | 14:11 |
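The direct-to-glance route kbringard describes, sketched with the 2011-era `glance add` syntax (image name, formats, and file are placeholders):

```bash
# Sketch of uploading straight into glance, bypassing objectstore
# (name, formats, and file name are placeholders).
glance add name="natty-server" is_public=true \
  disk_format=qcow2 container_format=ovf < natty-server.qcow2
glance index    # verify the image appears with a sane size
```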
*** imsplitbit has joined #openstack | 14:12 | |
kbringard | alekibango: you'll need to rely on the objectstore "middleware" | 14:12 |
Dunkirk | kbringard: That's fantastic! I really need to see what things are going on behind the scenes. | 14:12 |
kbringard | since it handles the decrypting and unbundling | 14:12 |
alekibango | hmm it sounds like you explained it in one line for me... :) | 14:12 |
*** shentonfreude has joined #openstack | 14:12 | |
alekibango | except that some example config would be very nice to have in the wiki | 14:12 |
kbringard | glance doesn't talk ec2 | 14:12 |
alekibango | kbringard: i know... but i was not able to connect them together | 14:13 |
kbringard | in theory having objectstore running on the same machine as glance should "just work" | 14:13 |
alekibango | if you could please add some info into wiki, we would all be gratefull | 14:13 |
alekibango | kbringard: that theory didn't work for me | 14:13 |
alekibango | :) | 14:13 |
alekibango | and please provide example config(s) | 14:14 |
kbringard | but in truth I've not used it much… I switched to pure glance quite a long time ago and don't even have objectstore installed | 14:14 |
kbringard | I'd have to figure it out myself before I could provide examples | 14:14 |
alekibango | i had all kinds of hell with glance, trying to make it run before summer | 14:14 |
kbringard | but if I have some time I'll look at it | 14:14 |
Dunkirk | So... how do I "unregister" a tarball? | 14:15 |
alekibango | kbringard: thanks, i can wait a bit | 14:15 |
*** ldlework has joined #openstack | 14:15 | |
alekibango | but think about other noobs too | 14:15 |
alekibango | its pain | 14:15 |
Dunkirk | ...From "nova", so that I can re-do it and get it into "glance"? | 14:15 |
alekibango | :) | 14:15 |
kbringard | yea, documentation as a whole is something we're lacking… I think there is a documentation sprint scheduled pretty soon here | 14:15 |
kbringard | so hopefully in the near future things will get better on that front | 14:15 |
alekibango | you need to get more people writing... not accepting patches without docs could be a plus | 14:17 |
kbringard | yea… and tests too | 14:17 |
alekibango | functional tests | 14:17 |
alekibango | documented, open | 14:17 |
alekibango | (serving also as example for noobs) | 14:18 |
alekibango | that would help bring openstack above other platforms | 14:18 |
kbringard | Dunkirk: I'm not sure… /me thinks | 14:18 |
alekibango | right now i am playing with archipel | 14:18 |
alekibango | and well, its so easy to get running | 14:18 |
alekibango | just one command | 14:18 |
*** lotrpy has quit IRC | 14:18 | |
alekibango | :) | 14:18 |
*** osier has quit IRC | 14:18 | |
kbringard | that's awesome | 14:18 |
alekibango | well, it lacks some features. but its enough for many | 14:19 |
alekibango | and those people could get openstack, if that would be easier | 14:19 |
*** benner_ has quit IRC | 14:19 | |
*** lotrpy has joined #openstack | 14:19 | |
*** Gordonz has quit IRC | 14:19 | |
*** benner has joined #openstack | 14:20 | |
alekibango | by making it hard for small businesses, you are losing a big share | 14:20 |
Dunkirk | kbringard: I think I found it: euca-deregister? | 14:20 |
alekibango | openstack is good for 40, or 100+ servers | 14:20 |
kbringard | Dunkirk: sounds right | 14:20 |
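The deregister-then-republish cycle being discussed, sketched with invented IDs and file names:

```bash
# Sketch: drop the stale registrations, then publish again now that
# glance is running (IDs, tarball, and bucket are invented).
euca-describe-images              # find the stale ami-/aki- IDs
euca-deregister ami-00000001
euca-deregister aki-00000002
uec-publish-tarball natty-server.tar.gz mybucket
glance index                      # confirm the images made it into glance
```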
*** amccabe has joined #openstack | 14:20 | |
alekibango | should be easy for 3 too | 14:20 |
kbringard | yea, agreed | 14:22 |
*** mu574n9 has joined #openstack | 14:24 | |
Dunkirk | kbringard: That seemed to do the right thing, but euca-publish didn't get the images registered with glance anyway. :-( | 14:25 |
annegentle | alekibango: and kbringard are you hoping to sprint on docs as well? | 14:27 |
kbringard | where I can | 14:27 |
*** lborda has joined #openstack | 14:27 | |
kbringard | I have a lot of random notes and stuff I've written down about things that I've been a terrible person about and not put in wikis | 14:28 |
*** javiF has joined #openstack | 14:28 | |
annegentle | kbringard: terrible person! :) | 14:29 |
*** duffman has joined #openstack | 14:29 | |
kbringard | haha, I know! no need to rub it in | 14:29 |
kbringard | :-p | 14:29 |
annegentle | :) | 14:29 |
*** mattray has joined #openstack | 14:30 | |
annegentle | I have been trying to determine a good week. I guess after Aug. 22nd would be good to sprint. | 14:30 |
kbringard | seems reasonable to me | 14:30 |
kbringard | I'll be on vacation the 19th-24th | 14:30 |
kbringard | not that you should plan sprints around me | 14:30 |
kbringard | haha | 14:30 |
annegentle | :) | 14:30 |
kbringard | haha, or perhaps planning them when I'm not around to mess all the docs up is a good idea | 14:30 |
*** dobber has quit IRC | 14:31 | |
kbringard | I'm like Fry's worms on that Parasites Lost episode of Futurama: I think I'm making things better, but as it turns out, you've just got worms | 14:31 |
uvirtbot | New bug: #825241 in nova "SQLAlchemy + Postgres + Eventlet" [Undecided,New] https://launchpad.net/bugs/825241 | 14:31 |
kbringard | brb, need coffee | 14:32 |
*** anp has quit IRC | 14:33 | |
*** npmapn has joined #openstack | 14:33 | |
alekibango | annegentle: unfortunately i can't now... maybe if i would get paid... i need to earn some $$ this month | 14:35 |
*** llang629_ has left #openstack | 14:38 | |
alekibango | was playing with clouds for too long for free :) | 14:38 |
*** reed has joined #openstack | 14:39 | |
*** dendrobates is now known as dendro-afk | 14:43 | |
*** dendro-afk is now known as dendrobates | 14:43 | |
annegentle | alekibango: totally understand :) | 14:43 |
*** siwos has quit IRC | 14:49 | |
*** lborda has quit IRC | 14:51 | |
*** rnirmal has joined #openstack | 14:51 | |
*** nmistry has joined #openstack | 14:51 | |
*** mfer has joined #openstack | 14:55 | |
*** oziaczek has quit IRC | 15:02 | |
*** dragondm has joined #openstack | 15:04 | |
*** SCR512 has joined #openstack | 15:05 | |
*** cp16net has joined #openstack | 15:07 | |
*** alandman has joined #openstack | 15:10 | |
*** odyi has quit IRC | 15:11 | |
*** nmistry has quit IRC | 15:12 | |
*** odyi has joined #openstack | 15:12 | |
*** odyi has joined #openstack | 15:12 | |
SCR512 | Any one experience issues with the system router VM not properly handing out DHCP address to instances? | 15:13 |
*** jkoelker has quit IRC | 15:15 | |
*** jkoelker has joined #openstack | 15:15 | |
*** jimbob5 has joined #openstack | 15:16 | |
*** jimbob5 has quit IRC | 15:17 | |
*** jkoelker has quit IRC | 15:18 | |
*** SCR512 has left #openstack | 15:18 | |
*** nati has quit IRC | 15:19 | |
*** jkoelker has joined #openstack | 15:19 | |
*** Gordonz has joined #openstack | 15:21 | |
uvirtbot | New bug: #825269 in nova "EC2 API: terminated instances still show up when describing instnaces " [Medium,New] https://launchpad.net/bugs/825269 | 15:21 |
kbringard | that bug is a dupe | 15:25 |
*** javiF has quit IRC | 15:26 | |
*** truijllo has quit IRC | 15:26 | |
*** heckj has joined #openstack | 15:28 | |
*** guigui has quit IRC | 15:28 | |
*** vladimir3p has joined #openstack | 15:29 | |
*** heckj has quit IRC | 15:29 | |
viraptor | can nova-manage be used to downgrade the schema in any way? If I do sync --version=something_older I just get an exception | 15:30 |
annegentle | viraptor: not that I know of, but that's not a bad idea, maybe log it as a request | 15:32 |
annegentle | viraptor: and someone can say whether it's technically feasibl | 15:33 |
annegentle | feasible even | 15:33 |
viraptor | the code for downgrade is there in the migrations already | 15:33 |
viraptor | but sync db doesn't call the right function it seems | 15:33 |
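Since the downgrade functions ship inside nova's migrate repository, sqlalchemy-migrate's own CLI can drive them even though `nova-manage db sync` won't; a sketch, with the DB URL and repository path as assumptions for a stock install:

```bash
# Sketch: invoke sqlalchemy-migrate directly against nova's migrate_repo
# (DB URL and repository path are assumptions; adjust for your install).
REPO=/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migrate_repo
URL='mysql://nova:password@localhost/nova'
migrate db_version "$URL" "$REPO"    # show the current schema version
migrate downgrade "$URL" "$REPO" 14  # target version 14 is illustrative
```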
*** nati has joined #openstack | 15:37 | |
nhm | Have any of you guys tried using the glusterfs connector? | 15:38 |
kbringard | nhm: sorry, nope | 15:39 |
nhm | kbringard: trying to decide what to use for a storage backend. | 15:39 |
kbringard | yea, me too | 15:40 |
*** heckj has joined #openstack | 15:40 | |
nhm | kbringard: what have you been considering? | 15:41 |
kbringard | are you talking like S3 style? | 15:41 |
kbringard | or do you mean like, for shared instance directories and stuff | 15:41 |
nhm | kbringard: probably both | 15:41 |
kbringard | we're still working on the S3 stuff | 15:41 |
kbringard | been looking at Riak | 15:41 |
kbringard | supposedly they're working on an S3 compliant API | 15:42 |
kbringard | and, of course, swift | 15:42 |
nhm | I'm using swift now. | 15:42 |
*** CloudAche84 has quit IRC | 15:43 | |
kbringard | I'm actually working on getting a cluster setup as we speak | 15:43 |
nhm | kbringard: Cool, I've had a little 14-node test cluster running for a couple of months. Over all it does well except that I desperately need storage. | 15:44 |
nhm | We just got a nice grant to build a production cluster so I need to start getting serious. :) | 15:44 |
kbringard | I've not had a ton of personal experience with gluster, so I don't know if this is just me talking out of my ass | 15:44 |
*** nickon has quit IRC | 15:44 | |
kbringard | but, I've heard a few horror stories about gluster with lots of small files | 15:45 |
*** jtanner has quit IRC | 15:45 | |
uvirtbot | New bug: #825288 in nova "Kernel Panic when start instance in Xen Environment" [Undecided,New] https://launchpad.net/bugs/825288 | 15:47 |
*** jtanner has joined #openstack | 15:47 | |
creiht | If anyone gets the object storage layer working on gluster, I would be interested in hearing their experiences | 15:48 |
*** dgags has joined #openstack | 15:49 | |
kbringard | hah, creiht, you and nhm should chat | 15:50 |
nhm | kbringard: we ran it briefly on one of our supercomputers, but due to some problems that may or may not have been gluster related, we ended up moving to lustre. | 15:50 |
nhm | creiht: I'm probably going to be giving it a try some time soonish. I've got a 500TB lustre deployment I have to do in 2 weeks, but I'm hoping to squeeze it in. | 15:51 |
creiht | only so much time in a day :) | 15:52 |
nhm | tell me about it! | 15:52 |
creiht | nhm: hah, I've heard just as many horror storied about lustre :) | 15:52 |
creiht | stories | 15:53 |
nhm | creiht: oh lustre is a beast certainly. :) | 15:53 |
*** willaerk has quit IRC | 15:54 | |
nhm | Its problems are more about maintainability though. It works, it just likes to stab you. Repeatedly. | 15:54 |
nhm | In the eye. | 15:54 |
creiht | hah | 15:55 |
*** denken has joined #openstack | 15:55 | |
nhm | creiht: So one thing I'm still not very clear on is whether glusterfs with openstack gives you any real benefits over swift/nfs/etc. | 15:57 |
nhm | I read through the install guide and it looked like they were just using the client on the compute nodes to share the VM images? | 15:58 |
notmyname | nhm: I've been trying to track down some gluster people to get some clarity on what their new connector actually is/does | 15:58 |
*** mrrk has joined #openstack | 15:59 | |
*** mgius has joined #openstack | 15:59 | |
*** huslage has quit IRC | 16:01 | |
*** ewindisch has quit IRC | 16:02 | |
nhm | notmyname: yeah, I've been out of the loop for too long to know if the features they list are unique to using glusterfs or would work with NFS backed storage mounted on the compute nodes. | 16:02 |
nhm | or with swift for that matter. | 16:02 |
*** jtanner has quit IRC | 16:02 | |
*** jtanner has joined #openstack | 16:03 | |
*** lpatil has joined #openstack | 16:03 | |
*** rchavik has quit IRC | 16:05 | |
creiht | hrm | 16:06 |
creiht | as pointed out by a friend, triggering replication on gluster is, um, interesting | 16:06 |
*** nphase has joined #openstack | 16:06 | |
creiht | http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Triggering_Self-Heal_on_Replicate | 16:06 |
creiht | it is a different beast though | 16:09 |
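The page creiht links triggers self-heal by statting every file through the mount, roughly like this (the mount point is a placeholder):

```bash
# Rough sketch of the self-heal trigger from the linked Gluster 3.2 docs:
# stat every file via the mount so replicate re-syncs stale copies
# (/mnt/glusterfs is a placeholder).
find /mnt/glusterfs -noleaf -print0 | xargs --null stat >/dev/null
```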
nhm | creiht: yeah, I was reading about some... issues with replication. | 16:09 |
creiht | heh | 16:10 |
*** dolph_ has joined #openstack | 16:10 | |
nhm | notmyname: yeah, I've been out of the loop for too long to know if the features they list are unique to using glusterfs or would work with NFS backed storage mounted on the compute nodes. http://gluster.com/community/documentation/index.php/3.3beta | 16:10 |
nhm | doh, sorry | 16:10 |
nhm | apparently they are going for unified object storage in 3.3: http://gluster.com/community/documentation/index.php/3.3beta | 16:11 |
creiht | nhm: yeah I looked at it briefly, and it looks like it is mostly a hacked up version of swift on top of their volumes | 16:14 |
*** dendrobates is now known as dendro-afk | 16:16 | |
nhm | creiht: yeah, that's kind of what it looks like. I wonder how stable it is. ;) | 16:19 |
nhm | Anyone played around with keystone yet? | 16:19 |
heckj | we've been working with keystone (on and off) for the past several weeks | 16:19 |
nhm | heckj: what are your impressions? | 16:20 |
heckj | Off it more recently (since d3 milestone), there's a lot of motion on it. | 16:20 |
dolph_ | nhm, heckj: feedback encouraged -keystone dev | 16:20 |
notmyname | I've messed with it enough to run it on my personal swift dev environment | 16:20 |
heckj | It's got the basics of AuthN, flirts with AuthZ, and has a service catalog component that takes a while to understand how to set up | 16:20 |
notmyname | dolph_: it allows anyone any access as long as you have a valid token | 16:20 |
heckj | docs on setting it up need work - but once you get it running, it does its job | 16:21 |
dolph_ | heckj, in a meeting working on the service catalog api *right now* | 16:21 |
heckj | dolph_: how to set it up with the keystone-manage commands has been the biggest hurdle - just not clear what does what and how it impacts things | 16:21 |
mgius | I've gone in and executed SQL by hand a couple times because doing things through keystone-manage was too confusing | 16:22 |
heckj | nhm: the key for us (we were actively integrating swift into dashboard) was to get the URL endpoints correct from the sample-data | 16:22 |
dolph_ | heckj, i'm not a fan of keystone-manage at all... i think everything needs to work through the api after bootstrap config | 16:22 |
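For reference, the keystone-manage bootstrap heckj and mgius are wrestling with looked roughly like the project's sample-data script at the time; the exact subcommand syntax shifted between milestones, so treat this as a loose sketch with invented names and token:

```bash
# Loose sketch of a D3-era keystone-manage bootstrap, in the style of the
# sample-data script (tenant/user/role/token values are invented; verify
# the subcommands against keystone-manage --help for your milestone).
keystone-manage tenant add demo
keystone-manage user add admin secrete
keystone-manage role add Admin
keystone-manage role grant Admin admin
keystone-manage token add 999888777666 admin demo 2015-02-05T00:00
```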
nhm | dolph_: haven't touched it yet, but will probably do so soon. Just had a request to tie into a radius backend. O_O | 16:23 |
heckj | dolph_: would love that! I'm a huge fan of a well documented, and ideally simple, API | 16:23 |
dolph_ | heckj, endpoints are a strange topic due to rackspace vs openstack -- i think we have some ideas to simplify them though | 16:23 |
nhm | doph_: integration into dashboard would be fantastic. | 16:23 |
heckj | today I'm tracking down a bug with sqlalchemy-migrate on Ubuntu 11.04 - kicked my butt yesterday with a PPA install from trunk | 16:24 |
dolph_ | nhm, a radius backend for keystone? | 16:24 |
nhm | dolph_: They don't know anything about keystone. More like "It'd be cool if we could set up a radius server and use it to target multiple backends for auth!" | 16:25 |
mgius | mmmm....swift packaging bug | 16:26 |
WormMan | today's project is to install a trunk version on a clean system, then if that works to show my coworkers how it works in the next week or two | 16:26 |
dolph_ | nhm, keystone is definitely positioned to be that solution... just don't have a radius implementation yet :P | 16:26 |
gnu111 | i need to remove some compute nodes. where are these stored? I can't find them in the mysql db. | 16:26 |
nhm | dolph_: I don't really want to setup a radius server anyway. ;) | 16:26 |
dolph_ | nhm, ha | 16:27 |
nhm | dolph_: Though ldap is definitely useful... so long as I don't have to get into a political battle about being able to modify it. ;) | 16:27 |
dolph_ | nhm, modifying the ldap backend code? | 16:27 |
nhm | dolph_: no, the tree. | 16:28 |
*** nati has quit IRC | 16:28 | |
heckj | WormMan: stick with Ubuntu 10.10 for now if you're doing a package based install | 16:28 |
dolph_ | nhm, ah :) | 16:28 |
heckj | nhm: been there! | 16:28 |
nhm | dolph_: The group here that controls the ldap servers are rather protective of their turf. :) | 16:29 |
dolph_ | nhm, as for dashboard, that's coming up pretty soon... keystone will probably go core right before dashboard does, which means dashboard will have to support keystone pretty quick | 16:29 |
nhm | heckj: indeed. Actually, that was why radius was suggested. Apparently you can do some kind of proxying or something and pass through to an existing ldap server without touching it. | 16:30 |
*** ewindisch has joined #openstack | 16:30 | |
nhm | dolph_: any rough estimates for when that might happen? | 16:31 |
dolph_ | nhm, rough estimate... i could see it happen within 6 weeks | 16:31 |
dolph_ | nhm, it's a discussion we'll probably start up next week | 16:31 |
nhm | dolph_: excellent. We got a grant to deploy openstack. :) | 16:31 |
dolph_ | nhm, nice! grats | 16:32 |
nhm | dolph_: Thanks, two year project. about $170k for hardware. | 16:32 |
dolph_ | nhm, O_O | 16:32 |
nhm | Mostly supporting proteomics and bioinformatics research. | 16:32 |
dolph_ | *woosh* | 16:32 |
creiht | nhm: cool | 16:33 |
nhm | dolph_: basically looking for proteins that have certain characteristics that might help with creating new medicines. | 16:33 |
nhm | or the pressence of certain proteins that might help explain why certain illnesses behave the way they do. | 16:34 |
dolph_ | nhm, ah, like folding at home! </geek> | 16:34 |
jtanner | nhm, what made you choose a "cloud" type infra instead of a traditional compute cluster? | 16:35 |
nhm | dolph: yeah, folding basically is simulating how the structure works. This is using other techniques to try to identify proteins and if there is an unknown one guess as to what it does based on how similar it looks to known ones. | 16:35 |
*** katkee has quit IRC | 16:35 | |
*** ewindisch has quit IRC | 16:36 | |
nhm | jtanner: A lot of bio people are kind of skipping clusters and going directly to amazon so some of the software is already being distributed via AMIs. | 16:37 |
jtanner | i see | 16:37 |
nhm | jtanner: Also, buzzwords win grants. ;P | 16:37 |
jtanner | nhm, no doubt. | 16:37 |
*** HugoKuo has joined #openstack | 16:38 | |
*** jdurgin has joined #openstack | 16:38 | |
*** ewindisch has joined #openstack | 16:38 | |
nhm | potentially another really cool possibility is being able to preserve the exact environment in which the analytics were done. | 16:38 |
*** anotherjesse has joined #openstack | 16:39 | |
dilemma | nhm: as in, "store the server" that was done to compute a particular item as a VM image, and when that item becomes interesting later, fire up the server, and you're guaranteed the same computing environment? | 16:40 |
nhm | dilemma: yeah. Or if you want to reproduce results. | 16:41 |
*** obino has quit IRC | 16:41 | |
*** irahgel has quit IRC | 16:41 | |
*** tsuzuki has joined #openstack | 16:41 | |
nhm | or even just collaborate on research with other people and make sure you are using the same environment even if you are located at different sites. | 16:42 |
heckj | nhm: when someone is processing folding, is there a lot of IO associated with it (like sequence matching?) or is it mostly internal compute? | 16:42 |
jtanner | should be ram+cpu | 16:42 |
mgius | folding@home was all cpu and sometimes heavy ram | 16:43 |
dilemma | he's not just talking about folding though: https://www.msi.umn.edu/ | 16:43 |
*** lborda has joined #openstack | 16:45 | |
nhm | Yeah, people here do many different kinds of research. | 16:45 |
jtanner | nhm, any plans for using condor? | 16:47 |
nhm | jtanner: we've flirted with it off and on. | 16:47 |
nhm | jtanner: I actually had it setup on some test hardware about a year ago. | 16:48 |
jtanner | nhm, will your implementation be documented publicly? | 16:48 |
jtanner | of openstack | 16:48 |
nhm | jtanner: I'd like to. I think it would be good PR for us. | 16:49 |
jtanner | and for openstack | 16:49 |
nhm | jtanner: indeed | 16:49 |
*** javiF has joined #openstack | 16:49 | |
nhm | jtanner: maybe for whatever vendor we go with too. | 16:49 |
jtanner | i wonder how much dell charges for their crowbar setup | 16:50 |
nhm | Lots of work to do though. I need to figure out how to make all of the stars line up right regarding Auth, PCIE passthrough (for GPU nodes), Storage, hardware, etc etc. | 16:50 |
nhm | jtanner: I just emailed our Dell rep this morning. Waiting to hear back. | 16:51 |
jtanner | are you guys okay with being restricted to ubuntu? | 16:51 |
uvirtbot | New bug: #825338 in swift "Existing "swift" user modified on package install" [Undecided,New] https://launchpad.net/bugs/825338 | 16:51 |
nhm | jtanner: I've fought that battle a bit. We'll use Ubuntu for the nodes, and I'll probably provide SL6.1 or CentOS6 as a VM option. | 16:52 |
nhm | Though the folks at the Mayo Clinic are going to be building VM images as part of this grant too. | 16:52 |
jtanner | it shouldn't be too hard to slap rhel/cent into crowbar | 16:52 |
dilemma | yeah, SL6 / CentOS6 support would have made my life a lot easier as well | 16:52 |
jtanner | it seems like most of the barhandles already have deb+rpm support | 16:52 |
jtanner | i haven't tried it though | 16:53 |
BK_man | nhm: if your nodes will be from top vendors you probably need RHEL instead of Ubuntu | 16:53 |
nhm | jtanner: you've defintiely looked into it more than I have. My stuff is all Ubuntu+Kickstart+Puppe | 16:53 |
nhm | s/Puppe/Puppet | 16:53 |
BK_man | nhm: how many nodes is your target? | 16:53 |
nhm | BK_man: depends how many fat nodes we need. Probably 24-32ish. | 16:54 |
BK_man | dilemma: you mean SL/RHEL for hosts or for instances? | 16:54 |
dilemma | hosts | 16:54 |
nhm | BK_man: I'd like to have heterogenous nodes to support different workloads. | 16:54 |
BK_man | dilemma: http://yum.griddynamics.net/yum/diablo-3/openstack | 16:54 |
BK_man | dilemma: just passed the last QA session. Ready to install and use | 16:54 |
kbringard | quick ? about swift | 16:55 |
*** ccc11 has quit IRC | 16:55 | |
BK_man | nhm: which tool will you use for deployment of OS and OpenStack on bare metal? | 16:55 |
kbringard | the docs say: | 16:55 |
kbringard | Publish the local network IP address for use by scripts found later in this documentation: | 16:55 |
dilemma | BK_man: QA by griddynamics? | 16:55 |
kbringard | export STORAGE_LOCAL_NET_IP=10.1.2.3 | 16:55 |
kbringard | export PROXY_LOCAL_NET_IP=10.1.2.4 | 16:55 |
BK_man | dilemma: yep. | 16:55 |
kbringard | I'm not sure what each of those are | 16:55 |
uvirtbot | New bug: #825344 in openstack-ci "project watches incorrectly added to gerrit" [Undecided,New] https://launchpad.net/bugs/825344 | 16:56 |
nhm | BK_man: For my test cluster I'm using tftpboot/kickstart/puppet. Thinking of using cobbler to tie it together but it's working pretty well as is. | 16:56 |
dilemma | BK_man: sadly, "official" support for RHEL was required by the decision-makers around here for us to use it | 16:56 |
*** mo has joined #openstack | 16:57 | |
BK_man | nhm: you will need hw management solution. For RHEL-based OSes I recommend xCAT http://xcat.sf.net | 16:57 |
*** mo is now known as Guest44459 | 16:57 | |
nhm | BK_man: We use xCAT on some of our clusters. It's alright. | 16:57 |
BK_man | dilemma: just use our RPMs. I got report that working on SL 6.1 without major issues | 16:58 |
*** Guest44459 has quit IRC | 16:58 | |
BK_man | nhm: if your HW is IPMI 2.0 | 16:58 |
nhm | BK_man: we are currently a SLES shop, but will probably be moving away from that at some point. | 16:58 |
BK_man | nhm: IPMI 2.0 compliant you should have no problems installing your nodes using xCAT | 16:58 |
nhm | sadly our UV1000 will probably be SLES for its whole life. | 16:58 |
*** Guest44459 has joined #openstack | 16:59 | |
*** mszilagyi has joined #openstack | 16:59 | |
dilemma | I've already setup a small test cluster using your RPMs. I was forced to re-kick them with ubuntu due to concerns with continued RHEL support. | 16:59 |
BK_man | dilemma: switch to SL 6.1 | 17:00 |
BK_man | dilemma: we're going to support it in the near future (1 month) | 17:00 |
dilemma | "we" being upstream openstack? | 17:00 |
nhm | Hopefully government funding cuts won't affect SL. | 17:00 |
*** tsuzuki has quit IRC | 17:00 | |
*** j05h has quit IRC | 17:01 | |
BK_man | we means Grid Dynamics | 17:01 |
nhm | One of my coworkers used to work for Fermi. He's expressed concern. | 17:01 |
BK_man | I'm employed by Grid Dynamics and we support RHEL builds of the OpenStack projects - Nova, Swift, Glance and now Keystone and Dashboard | 17:02 |
HugoKuo | is keystone easy to implement? | 17:02 |
lorin1 | nhm: We're using OpenStack on a cluster with a UV100 and some nodes with GPUs. | 17:02 |
*** YorikSar has joined #openstack | 17:02 | |
BK_man | HugoKuo: not yet, but we're working to fix that soon :) | 17:03 |
nhm | lorin1: wow, that's crazy. :) | 17:03 |
*** johnmark has joined #openstack | 17:03 | |
dolph_ | HugoKuo, if it's not, let me know :) | 17:03 |
nhm | lorin1: One of our UV100s has GPUs connected to it. Recent kernel upgrade broke GPU access inside cpusets. :( | 17:03 |
dilemma | BK_man: right. I've been through your documentation, and see that your company contributes significantly to openstack, and whatnot. But the decision makers here are concerned with the fact that RHEL is not supported upstream. | 17:03 |
nhm | lorin1: how many cores in your UV100? | 17:03 |
*** duffman has quit IRC | 17:04 | |
lorin1 | nhm: 128 | 17:04 |
dilemma | In any case, we're too late in the deployment stage to switch our host OS now | 17:04 |
*** duffman has joined #openstack | 17:04 | |
HugoKuo | dolph_ : ok .... but i'm on vacation until the end of this month XD | 17:04 |
lorin1 | nhm: We were running SLES but recently upgraded to RHEL 6.1. | 17:04 |
nilsson | lorin1, are you using xen or kvm ? | 17:04 |
nhm | lorin1: Yeah, I saw that they are supporting RHEL now, but I think there is a limit on the number of cores. | 17:04 |
BK_man | dilemma: upstream is interested in RHEL RPMs too. | 17:04 |
dolph_ | HugoKuo, keystone will be easier by the end of the month! | 17:04 |
lorin1 | nilsson: We were using kvm, we're playing with lxc right now. | 17:04 |
kbringard | BK_man: does that mean you're supporting Cent 6, or just RHEL and my milage may vary? | 17:05 |
BK_man | kbringard: we're going to support CentOS also. So, RHEL, SL and CentOS _at least_ | 17:05 |
lorin1 | nhm: There is a guy on our team working on the issue with supporting higher number of cores. I think he had to switch to a newer kernel. | 17:06 |
HugoKuo | dolph_ , that's great XD | 17:06 |
lorin1 | nhm: How many cores in your UV1000? | 17:06 |
nhm | lorin1: our UV1000 has 1104 and our UV100s have 72 each. | 17:06 |
BK_man | kbringard: but we need to wait for CentOS 6.1 to release since it contains newer libguestfs which we need | 17:06 |
kbringard | yea, makes sense | 17:06 |
kbringard | by "support", are you contracting out support, or do you just mean you're going to keep building packages? | 17:06 |
*** obino has joined #openstack | 17:06 | |
WormMan | ahh, centos 6.1... AKA 2012 | 17:07 |
YorikSar | dolph_: Hello. Have you seen my mail? Does that change look good? | 17:07 |
kbringard | WormMan: if we're lucky | 17:07 |
WormMan | I like Ubuntu, but it's also a bit annoying. 10.04 has LTS, but no newer/faster/better performing KVM. Newer stuff, shorter support, better performance. | 17:07 |
BK_man | kbringard: we're doing both. Build packages for community and providing contractual support for our customes. Drop a message to cloudservices@griddynamics.com if you have a business request. | 17:08 |
kbringard | WormMan: yea, but you'll always have the newest <insert obscure package name here> | 17:08 |
*** stewart has joined #openstack | 17:08 | |
kbringard | BK_man: awesome, thanks | 17:08 |
nhm | WormMan: Yeah, the last LTS is getting a bit long in the tooth. | 17:08 |
*** maplebed has quit IRC | 17:09 | |
*** maplebed_ has joined #openstack | 17:09 | |
WormMan | of course, with the rapid pace of openstack, I'm not sure about trying to keep anything stable | 17:09 |
BK_man | WormMan: I prefer SL 6.1 rather CentOS for now :) | 17:10 |
WormMan | we're a CentOS shop, suggesting Ubuntu for anything was already met with torches and pitchforks :) | 17:11 |
nhm | WormMan: I know how that goes. :) | 17:12 |
kbringard | so were we | 17:12 |
*** javiF has quit IRC | 17:12 | |
dilemma | WormMan: exactly what happened around here | 17:13 |
nhm | WormMan: though realistically Ubuntu and clusters historically haven't mixed well. For openstack though, it's a different story. | 17:13 |
*** mattray has quit IRC | 17:13 | |
*** BK_man has quit IRC | 17:14 | |
WormMan | I've been doing Linux for a long time, I don't much care what OS it is... as long as it's not SuSE :) | 17:14 |
nhm | WormMan: that's what we run. ;) | 17:14 |
WormMan | (or SLS) | 17:14 |
WormMan | (or MCC Interim) | 17:14 |
nhm | slackware cluster. ;) | 17:14 |
creiht | gentoo! | 17:15 |
dilemma | Interestingly, since I'm only deploying openstack swift, I shouldn't have any problem slowly rekicking the cluster with a new OS in the future. Oh god, they may ask me to do that when CentOS6.1 is out and supported | 17:15 |
* creiht hides | 17:15 | |
creiht | :) | 17:15 |
*** obino has quit IRC | 17:15 | |
*** obino has joined #openstack | 17:15 | |
kbringard | so can anyone explain to me what the storage local net and the proxy local net IPs for swift are supposed to be? | 17:15 |
nhm | dilemma: you can have 100% uptime! | 17:15 |
creiht | kbringard: a bit of a carry over from rackspace | 17:16 |
creiht | kbringard: is this for stats | 17:16 |
creiht | ? | 17:16 |
*** maplebed_ is now known as maplebed | 17:16 | |
dilemma | nhm: that's true. I could preserve the XFS drives on the storage nodes, and not even resync any data | 17:16 |
kbringard | not exactly… I'm just diving into setting up swift for the first time | 17:16 |
YorikSar | dolph_: Oh, sorry. I've opened the other one. I see your lgtm under that, which I linked to in email | 17:16 |
kbringard | and I get the whole internal/external network thing | 17:16 |
creiht | oh from the docs | 17:17 |
kbringard | yea | 17:17 |
kbringard | I'm just not sure what parts of my setup each of those correspond to | 17:17 |
YorikSar | dolph_: Can you look at the change 222 as well, please? | 17:17 |
dolph_ | YorikSar, i've been in a meeting all day - 'import ldap' looked great (don't know why i had problems with it!), and i haven't had a chance to look at your other changes yet | 17:18 |
*** ccustine has joined #openstack | 17:18 | |
YorikSar | dolph_: We're just hoping that these changes can land in master this week so that we can tell the world how to use it. | 17:18 |
creiht | kbringard: so usually for a swift cluster we set up the storage nodes on a private network that isn't accessible from the outside | 17:18 |
creiht | so when it mentions the STORAGE_LOCAL_NET_IP, it is talking about the ip of that storage node on that private network | 17:19 |
dolph_ | YorikSar, give me a couple hours, max | 17:19 |
kbringard | ahhh, ok, so that will vary from node to node | 17:19 |
creiht | right | 17:19 |
kbringard | and the proxy_local_net will be the same, since there is only one proxy | 17:20 |
kbringard | proxy_local_net_ip I mean | 17:20 |
creiht | the PROXY_LOCAL_NET_IP is the public network IP for the proxy (or load balancer VIP if you are going to have several proxies) | 17:20 |
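A minimal sketch of the layout creiht describes, using swift's stock bind_ip/bind_port options; the addresses are placeholders, and the two fragments live in different files on different machines:

```ini
# /etc/swift/object-server.conf on a storage node; 10.1.2.21 stands
# in for that node's STORAGE_LOCAL_NET_IP (it differs per node).
[DEFAULT]
bind_ip = 10.1.2.21
bind_port = 6000

# /etc/swift/proxy-server.conf on the proxy; 172.16.0.10 stands in
# for PROXY_LOCAL_NET_IP (the public address or load-balancer VIP).
[DEFAULT]
bind_ip = 172.16.0.10
bind_port = 8080
```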
YorikSar | dolph_: Thanks a lot for your help. | 17:20 |
creiht | yeah | 17:20 |
dolph_ | YorikSar, thanks for your contribs! | 17:21 |
*** mrjazzcat is now known as mrjazzcat-afk | 17:22 | |
*** ewindisch has quit IRC | 17:23 | |
*** marrusl has quit IRC | 17:23 | |
*** lpatil has quit IRC | 17:23 | |
kbringard | creiht: sweet, thanks, that helps a lot | 17:23 |
*** darraghb has quit IRC | 17:24 | |
creiht | cool | 17:24 |
*** Alowishus has left #openstack | 17:24 | |
kbringard | the doc is a little confusing because they're on the same broadcast domain | 17:24 |
creiht | heh | 17:24 |
*** dolph_ has quit IRC | 17:24 | |
creiht | good point | 17:24 |
kbringard | so I was like "If one is internal and one is external… why are they 1 IP apart" | 17:25 |
*** amccabe has quit IRC | 17:25 | |
kbringard | no worries though, this makes more sense now, thanks again | 17:25 |
dilemma | anyone have any input on putting anycast in front of openstack swift instead of a load balancer? | 17:25 |
*** dendro-afk is now known as dendrobates | 17:26 | |
creiht | pandemicsyn: -^ ? | 17:27 |
creiht | dilemma: at that point you are getting a bit out of my areas of expertise :) | 17:27 |
*** ewindisch has joined #openstack | 17:27 | |
kbringard | creiht: since I'm just doing this for testing and it's all internal, I can make those the same, right? | 17:28 |
creiht | kbringard: sure | 17:29 |
kbringard | OK, cool | 17:29 |
kbringard | didn't know if that'd make it crap itself or something | 17:29 |
creiht | shouldn't | 17:29 |
kbringard | rad, I'll let you know | 17:29 |
kbringard | haha | 17:29 |
redbo | why would you put anycast in front of swift? | 17:29 |
*** jtanner has quit IRC | 17:30 | |
dilemma | to avoid the expense of a load balancer (and its correspondingly massive throughput requirements) | 17:30 |
dilemma | and, depending on network topology, reduce traffic between some switches within or between some data centers | 17:31 |
*** rfz_ has joined #openstack | 17:31 | |
dilemma | if swift had a mechanism to retrieve the nearest copy, I could improve that even further | 17:32 |
*** YorikSar has quit IRC | 17:35 | |
*** marrusl has joined #openstack | 17:37 | |
*** pguth66 has joined #openstack | 17:38 | |
redbo | Oh, yeah it might make sense to do anycast if you have multiple DCs. I don't know if anyone's tried that though. | 17:38 |
* exlt has thought about it ;-) | 17:39 | |
*** Guest44459 has quit IRC | 17:39 | |
*** amccabe has joined #openstack | 17:39 | |
*** YorikSar has joined #openstack | 17:39 | |
exlt | when talking about PUTs, the various data centers would need to talk to a central auth for validation, or auth could also be anycasted | 17:40 |
dilemma | central auth exists, in my case | 17:40 |
exlt | and if that central auth is not available... anycast is useless | 17:41 |
dilemma | I'm actually writing the WSGI middleware for it at this exact moment | 17:41 |
redbo | I don't think auth is really enough traffic to worry about usually. | 17:42 |
exlt | traffic is not the problem with anycast - getting authenticated in each data center is | 17:42 |
*** lpatil has joined #openstack | 17:42 | |
dilemma | so long as sessions are stored outside of the proxies, you're fine | 17:43 |
dilemma | and if the proxies are storing your session data locally, you're doing it wrong | 17:43 |
exlt | anycast only for public reads would be the best use case for anycast, I think | 17:43 |
*** HugoKuo1 has joined #openstack | 17:44 | |
dilemma | What specifically would be the problem authenticating with a centralized auth system from proxies behind anycast? | 17:44 |
redbo | I don't see how anycast and auth really interact. | 17:44 |
exlt | write to one location and replicate to other locations keeps things simple | 17:45 |
dilemma | redbo: I agree. | 17:45 |
*** HugoKuo has quit IRC | 17:45 | |
redbo | other than... maybe if you're really worried about latency, you could replicate your user db at every anycast endpoint. | 17:46 |
redbo | or every DC rather | 17:46 |
dilemma | or just cache sessions and user data in memcached on the proxies | 17:47 |
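A sketch of that setup, using swift's stock memcache middleware in proxy-server.conf (the IPs are placeholders): pointing every proxy at the same memcached pool keeps cached auth data off any single proxy.

```ini
# Shared cache pool for all proxies, so cached tokens/sessions are
# not proxy-local.
[filter:cache]
use = egg:swift#memcache
memcache_servers = 10.0.0.11:11211,10.0.0.12:11211
```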
*** hggdh has quit IRC | 17:47 | |
*** YorikSar_ has joined #openstack | 17:48 | |
redbo | well sure | 17:48 |
*** YorikSar has quit IRC | 17:48 | |
*** YorikSar_ is now known as YorikSar | 17:51 | |
exlt | my thought for an example: bigsite.org uses anycasted swift (2 locations) with one auth (location A) to store user photos - location A becomes unavailable (for whatever reason) - the one auth is no longer reachable so all uploads fail | 17:53 |
exlt | caching only helps for a previously authed user in location B | 17:54 |
*** HugoKuo1 has left #openstack | 17:54 | |
exlt | so not all uploads fail, but any new attempts will | 17:55 |
exlt | if this is acceptable, that's cool | 17:55 |
*** hggdh has joined #openstack | 17:56 | |
exlt | otherwise, each location should have a local auth, also anycasted - it all depends on how bulletproof it needs to be | 17:56 |
dilemma | ahh, right. So you're saying auth availability becomes a concern | 17:56 |
exlt | availability is the secondary reason for anycast - first is get closer to the user | 17:56 |
dilemma | Yeah, our auth system here is entirely separate, and has its own redundancy | 17:56 |
dilemma | For my purposes, it's safe to assume that auth is always up. It's a separately managed system that I'm integrating with. | 17:57 |
*** huslage has joined #openstack | 17:58 | |
dilemma | and around here, the auth system is far more important than the openstack swift deployment. If auth goes down, people are getting called at 4am on christmas morning to fix it. | 18:00 |
exlt | been there - I completely understand :-) | 18:00 |
dilemma | unfortunately, some of the benefits of anycast will be lost, due to the fact that the proxies pick a random copy from the storage nodes to serve up | 18:02 |
dilemma | if my nodes and proxies are scattered across data centers, the initial request will go to the nearest proxy, but the copy of the requested data could come from any node | 18:02 |
dilemma | creiht or anyone: know if there's plans to have the proxies consider network topology before pulling copies? | 18:03 |
redbo | I don't think hacking something like a preferred zone list into the ring would be all that epic. | 18:03 |
dilemma | exactly what I was just about to suggest | 18:04 |
dilemma | zone priority, on a per-proxy basis | 18:04 |
creiht | dilemma: we have talked about it before | 18:04 |
creiht | but I don't think we have gone much beyond that because for the moment it works well enough, without the added complexity :) | 18:05 |
redbo | it would really just have to order the nodes it returned so the one in the preferred list is first. | 18:05 |
dilemma | would a solid code contribution be a good motivator for that discussion? | 18:05 |
creiht | dilemma: certainly | 18:05 |
dilemma | if you can point me to a couple of integration points, I could probably get the go-ahead from my employer to look at it | 18:06 |
dilemma | cross-dc traffic was a big topic of discussion here | 18:06 |
dilemma | if it has to reach into too much code, probably not though. If it's a fairly clean modification (replace a single object? hell yeah.) I don't see why not. | 18:08 |
redbo | I think it could be as simple as a new config option for preferred zones and a sort call where there's currently a shuffle to fakey load-balance between object nodes. | 18:08 |
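A rough Python sketch of redbo's suggestion; the function and field names are illustrative, not swift's actual identifiers. The existing shuffle is kept for load spreading, and a stable sort then floats nodes in preferred zones to the front.

```python
import random

def order_nodes(nodes, preferred_zones=None):
    # Hypothetical sketch, not swift's real code path.
    random.shuffle(nodes)  # current behavior: spread load randomly
    if not preferred_zones:
        return nodes  # default: plain shuffle, as today
    rank = dict((z, i) for i, z in enumerate(preferred_zones))
    # Python's sort is stable, so nodes outside the preferred list
    # keep their shuffled relative order behind the preferred ones.
    return sorted(nodes, key=lambda n: rank.get(n['zone'], len(rank)))
```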
*** mattray has joined #openstack | 18:09 | |
dilemma | Yeah, I could see that. Wow, maybe I could have a patch ready by the end of next week. I'll be talking to my employer on Monday. | 18:09 |
*** dprince has quit IRC | 18:11 | |
*** dolph_ has joined #openstack | 18:11 | |
*** dprince has joined #openstack | 18:13 | |
dilemma | creiht: if it were approved, how quickly does stuff like this make it into the official packages? | 18:14 |
dilemma | I'm currently using the 1.3 ppa for ubuntu | 18:15 |
creiht | dilemma: usually pretty quickly, but not sure what the deadline is for the last release before diablo | 18:15 |
creiht | notmyname: ? | 18:15 |
redbo | the official packages? I've identified a potential problem. | 18:16 |
notmyname | one more release (1.4.3) around the end of this month | 18:16 |
notmyname | 1.4.3 will be diablo | 18:16 |
notmyname | dilemma: creiht: redbo: I haven't been following along... | 18:17 |
notmyname | what's the tl;dr? | 18:17 |
*** kashyap_ has quit IRC | 18:17 | |
dilemma | I'm interested in submitting a patch to allow per-proxy zone priority in swift | 18:17 |
notmyname | ok. cool | 18:18 |
dilemma | and wondering if I can get it into official packages fast enough to use in my deployment without worrying my employer about using a patched openstack | 18:18 |
notmyname | dilemma: so proxy one writes to zones c, b, and a while proxy two writes to b, a, and c? | 18:18 |
dilemma | reads are the primary concern | 18:18 |
redbo | dilemma: I don't know, the Release PPA might be okay. | 18:19 |
dilemma | I don't want to change the way it distributes data | 18:19 |
notmyname | dilemma: so (as redbo points out) "official" packages are a curious case | 18:19 |
dilemma | not so official then? | 18:20 |
notmyname | dilemma: all swift releases are production-ready. currently, our milestones are different than the nova/glance milestones | 18:20 |
*** tjikkun has joined #openstack | 18:20 | |
*** tjikkun has joined #openstack | 18:20 | |
notmyname | each swift release (milestone or otherwise) is "official" from a swift perspective (as in, ready for production at scale). but there is only one official openstack release every 6 months | 18:21 |
*** herman__ has quit IRC | 18:21 | |
notmyname | because of the different nature of openstack packages, we (Rackspace) maintain our own swift packaging that we use for production (https://github.com/crashsite/swift_debian) | 18:21 |
notmyname | for the last 6 months (openstack's diablo cycle) we have tried to ensure that we release an official swift version every time we release in production. we've done pretty well, but it's not a 100% match | 18:23 |
notmyname | dilemma: so, all that being said, even if your patch doesn't make it for diablo, it can be in an "official" swift release soon after (so you can run an unpatched swift install) | 18:24 |
dilemma | well damn. That changes the game for me a bit. I was avoiding newer versions of swift, and lamenting the fact that I was probably going to miss diablo for my deployment. | 18:24 |
notmyname | really, it comes down to where you install from | 18:24 |
dilemma | yeah, I wasn't aware of that repository | 18:25 |
nhm | Heh, I'm still on cactus | 18:25 |
notmyname | it's not a secret, but also not something we trumpet around in the openstack world :-) | 18:25 |
*** clauden_ has joined #openstack | 18:25 | |
nhm | How are the diablo releases feeling so far? | 18:25 |
* dilemma updates his dev cluster | 18:26 | |
dilemma | my qa team is about to get some extra work :) | 18:26 |
*** vernhart has quit IRC | 18:26 | |
notmyname | heh | 18:26 |
nhm | dilemma: you have a QA team? wow | 18:27 |
nhm | dilemma: I sometimes have an undergraduate student if I'm lucky. ;) | 18:27 |
*** herman_ has joined #openstack | 18:27 | |
*** allsystemsarego has quit IRC | 18:28 | |
dilemma | and a dev team... too bad they're all perl guys | 18:28 |
nhm | but then their souls get crushed and we must feed on others. | 18:28 |
dilemma | which is why I'm tasked with the auth middleware | 18:28 |
nhm | dilemma: I'm an old perl guy. ;) | 18:28 |
* nhm strains his back and yells about kids and lawns | 18:29 | |
dilemma | I once tried perl. Then my soul was crushed, and the dev team had to feed on others. | 18:30 |
notmyname | dilemma: the crashsite repo is what we use for cloud files at rackspace | 18:30 |
dilemma | notmyname: that's great. I'll be switching everything over. I'll probably also be submitting a patch for per-proxy zone read preferences | 18:33 |
dilemma | as in, an optional preference that, if present, determines the order in which zones are preferred when retrieving an object from the storage nodes | 18:34 |
dilemma | you'll set weights on zones or something, and randomly choose from identical weights, with the default being all zones have identical weights | 18:35 |
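A hedged sketch of that weighting scheme (all names hypothetical): zones are ordered by weight, ties are shuffled among themselves, so the default of uniform weights reproduces today's random choice.

```python
import random
from itertools import groupby

def order_zones(zones, weights=None):
    # Hypothetical sketch: lower weight = read from earlier.
    weights = weights or {}
    keyed = sorted(zones, key=lambda z: weights.get(z, 0))
    ordered = []
    for _, group in groupby(keyed, key=lambda z: weights.get(z, 0)):
        bucket = list(group)
        random.shuffle(bucket)  # random among identical weights
        ordered.extend(bucket)
    return ordered
```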
dilemma | would that have a chance of being accepted? | 18:35 |
notmyname | dilemma: yes :-) | 18:36 |
notmyname | dilemma: make sure it has some docs and unit test coverage. and the code (not the tests) needs to pass pep8 | 18:36 |
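In that spirit, a tiny hypothetical unit test for the order_nodes() sketch above (plain unittest; the names are still illustrative):

```python
import unittest

class TestOrderNodes(unittest.TestCase):
    def test_preferred_zone_sorts_first(self):
        # Zone 3 is preferred, so it must come back first.
        nodes = [{'zone': 1}, {'zone': 2}, {'zone': 3}]
        ordered = order_nodes(nodes, preferred_zones=[3])
        self.assertEqual(3, ordered[0]['zone'])

if __name__ == '__main__':
    unittest.main()
```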
*** redkilian has quit IRC | 18:36 | |
*** mattray has quit IRC | 18:36 | |
dilemma | I can do that. My employer would have to approve the time I spend on it, of course. | 18:37 |
dolph_ | nhm: still around? | 18:38 |
notmyname | dilemma: we look forward to seeing it :-) | 18:38 |
nhm | dolph_: yep | 18:38 |
dolph_ | nhm, just heard some news... dashboard is working on keystone integration *right now* | 18:38 |
nhm | dolph_: sweet! :D | 18:39 |
YorikSar | dolph_: Hasn't Dashboard had Keystone for like a month already? | 18:40 |
dolph_ | YorikSar, no clue - i heard this week that it was starting soon... and was just corrected | 18:40 |
dolph_ | YorikSar, i'd be curious to know how much progress they've made | 18:41 |
YorikSar | dolph_: Actually, the reason why I started to work on LDAP in Keystone is the fact that the trunk version of Dashboard mandates Keystone auth | 18:41 |
nhm | YorikSar: Does LDAP in keystone require write access? | 18:43 |
dolph_ | YorikSar, oh awesome | 18:43 |
dolph_ | nhm, (require write access to what?) | 18:44 |
YorikSar | nhm: Yes, of course. | 18:44 |
YorikSar | nhm: Well, for admin interface at least. | 18:44 |
YorikSar | nhm: The current version of the LDAP backend is not complete because it cannot work with existing user and tenant trees | 18:45 |
dolph_ | YorikSar, does one of your changes fix that? | 18:46 |
YorikSar | I'm going to fix this on Monday | 18:46 |
YorikSar | dolph_: No, my change just forces it to work correctly with the assumption that it has a separate playground somewhere in LDAP. | 18:47 |
*** ewindisch has quit IRC | 18:48 | |
*** katkee has joined #openstack | 18:48 | |
nhm | YorikSar: Yeah, I'm trying to avoid having a fight with our LDAP admins. ;) | 18:48 |
YorikSar | dolph_: Oh, I see you pushed the second part to Jenkins, thanks! | 18:49 |
dolph_ | YorikSar, /salute! | 18:49 |
YorikSar | nhm: As soon as Jenkins finishes merging this change into master, we'll publish a blog post about how to make it work just as it is right now. | 18:50 |
*** alandman has quit IRC | 18:50 | |
*** mattray has joined #openstack | 18:53 | |
*** rfz_ has quit IRC | 18:54 | |
*** dprince has quit IRC | 18:57 | |
*** huslage has quit IRC | 18:59 | |
*** anotherjesse has quit IRC | 18:59 | |
*** lvaughn_ has quit IRC | 19:01 | |
*** lvaughn has joined #openstack | 19:01 | |
*** mgoldmann has quit IRC | 19:03 | |
*** lvaughn_ has joined #openstack | 19:05 | |
*** mrjazzcat-afk is now known as mrjazzcat | 19:06 | |
uvirtbot | New bug: #825419 in glance "Functional tests for private and shared images" [Undecided,New] https://launchpad.net/bugs/825419 | 19:06 |
*** rnorwood has joined #openstack | 19:08 | |
*** lvaughn has quit IRC | 19:08 | |
*** lvaughn_ has quit IRC | 19:08 | |
*** dragondm has quit IRC | 19:09 | |
*** lvaughn has joined #openstack | 19:09 | |
*** mrjazzcat has left #openstack | 19:13 | |
kbringard | anyone know how compatible the swift S3 api is at this point? | 19:13 |
*** stewart has quit IRC | 19:13 | |
creiht | kbringard: it should work for basic functionality | 19:13 |
creiht | It doesn't support things like ACLs | 19:14 |
kbringard | are there plans for it to? | 19:14 |
creiht | It starts getting a little fuzzy there, since ACLs are a bit different between the two | 19:15 |
kbringard | sorry if this is documented somewhere, I googled around and surfed the docs for a bit but didn't see anything | 19:15 |
creiht | of course patches are welcomed :) | 19:15 |
kbringard | hehe, of course | 19:15 |
creiht | It could use some work | 19:15 |
creiht | I changed responsibilities right after getting that in | 19:15 |
creiht | so it hasn't been updated a whole lot, and never got a chance to add better docs | 19:16 |
creiht | kbringard: http://swift.openstack.org/misc.html#module-swift.common.middleware.swift3 | 19:16 |
creiht | Is the best at the moment | 19:16 |
kbringard | oh, awesome, thanks | 19:16 |
annegentle | and here's as far as I got with doc'ing it: http://docs.openstack.org/trunk/openstack-object-storage/admin/content/configuring-openstack-object-storage-with-s3_api.html | 19:17 |
annegentle | creiht: I'll have to compare mine to yours, yours is better I'm sure :) | 19:17 |
kbringard | that sounds like a game we used to play in the playground | 19:17 |
kbringard | on* | 19:17 |
creiht | annegentle: oh cool, didn't realize you had snuck that in, thanks! :) | 19:17 |
* annegentle steals creiht's | 19:17 | |
kbringard | thank you both, this is super helpful | 19:18 |
creiht | annegentle: I think they could be merged | 19:18 |
annegentle | creiht: yeah looks like it | 19:18 |
creiht | I would leave the s3curl comments in there | 19:18 |
creiht | as that has come up several times | 19:18 |
creiht | kbringard: another small thing is that you have to use the old style api naming | 19:18 |
creiht | i.e., you can't reference the bucket in the domain name | 19:18 |
creiht | some tools don't support that | 19:19 |
creiht | or at least that's what's been reported | 19:19 |
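As an illustration, with boto (a common S3 client of the era) that means forcing path-style addressing instead of putting the bucket in the hostname; the endpoint and credentials below are placeholders, and swift3 compatibility beyond the basics is not guaranteed:

```python
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# OrdinaryCallingFormat keeps the bucket in the URL path ("old
# style") rather than in the domain name, which swift3 requires.
conn = S3Connection(aws_access_key_id='account:user',
                    aws_secret_access_key='secret',
                    host='swift-proxy.example.com', port=8080,
                    is_secure=False,
                    calling_format=OrdinaryCallingFormat())
bucket = conn.get_bucket('mybucket')
```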
creiht | kbringard: if you run into any glaring issues, let us know, or if there are features you would like at least submit a bug report | 19:20 |
creiht | My current work responsibilities don't provide the time to work on it anymore :/ | 19:21 |
kbringard | creiht: sure thing, and we'll submit patches if/when we can too | 19:22 |
creiht | kbringard: there have been a couple of contributors to it, so if you file a bug, there is a small chance they may pick it up | 19:23 |
*** dendrobates is now known as dendro-afk | 19:25 | |
*** johnmark has left #openstack | 19:31 | |
*** Ryan_Lane has joined #openstack | 19:35 | |
kbringard | creiht: does swift work with keystone and/or are there plans to add it (if not)? | 19:36 |
kbringard | I would have to assume yes... | 19:36 |
notmyname | kbringard: actually, it's the other way around | 19:36 |
kbringard | oh? | 19:36 |
kbringard | keystone needs swift support? | 19:37 |
notmyname | kbringard: keystone needs to work with swift. there is limited support there now | 19:37 |
dolph_ | notmyname, what does keystone have to do to support swift, specifically?? | 19:37 |
kbringard | ah, cool… so you pass your universal creds to keystone, which talks to swift and authorizes you and returns your token | 19:37 |
notmyname | kbringard: but it's not "production ready" (for example, it doesn't tie auth tokens to swift accounts, so any valid token gets authorized to do anything to any account) | 19:37 |
notmyname | kbringard: right | 19:37 |
kbringard | so I guess there'd have to be a section in your ldap (or whatever auth you're using) that has your keys for keystone to pull out | 19:38 |
kbringard | ? | 19:38 |
notmyname | I don't know the answer to that | 19:38 |
kbringard | hmm, an interesting problem to be sure | 19:38 |
notmyname | dolph_: the framework is there. just some...oddities (see my above comment to kbringard) | 19:39 |
*** reed has quit IRC | 19:40 | |
nilsson | are there any current billing apps that integrate with openstack? | 19:40 |
YorikSar | nhm: Here, the post is finally ready http://mirantis.blogspot.com/2011/08/ldap-identity-store-for-openstack.html | 19:40 |
*** msivanes has quit IRC | 19:41 | |
*** msivanes has joined #openstack | 19:41 | |
kbringard | notmyname: thanks for the info | 19:43 |
YorikSar | notmyname: I'm confused. What does Keystone need specially for Swift that it doesn't already have for Nova? | 19:45 |
dolph_ | notmyname, if i understand you, it's swift's responsibility to map keystone users to swift accounts, using keystone roles and keystone tenants | 19:45 |
YorikSar | dolph_: I think so too. | 19:45 |
notmyname | dolph_: in the auth middleware (that ships with keystone)? | 19:46 |
notmyname | dolph_: why does swift need to keep track of keystone users? | 19:47 |
*** bcwaldon has quit IRC | 19:47 | |
dolph_ | notmyname, i haven't looked much at the middleware, but the middleware doesn't do much afaik... it's all done through the keystone admin api | 19:47 |
YorikSar | notmyname: But eventually this middleware should land in each project separately, shouldn't it? | 19:47 |
*** jdurgin has quit IRC | 19:47 | |
*** carlp has quit IRC | 19:49 | |
nhm | YorikSar: that's great | 19:49 |
nhm | YorikSar: Now I need to upgrade and actually try all of this out. | 19:49 |
notmyname | YorikSar: actually, I think it belongs in the auth system. for example, if I have a different auth system, that other system should provide the swift/nova/etc glue with it | 19:50 |
YorikSar | notmyname: But I thought that it's the service's responsibility to work with the common default auth system. | 19:52 |
notmyname | dolph_: as an example, set up swift with keystone. get an auth token. use that token for _any_ swift account and it works. keystone is returning "authorized" regardless of the swift account | 19:53 |
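A hedged reproduction sketch of that symptom (Python 2 httplib; the endpoint, account names, and token are placeholders):

```python
import httplib

# The token below was issued for AUTH_account_a; a correct setup
# should reject this request against AUTH_account_b with 401/403,
# but the behavior notmyname describes authorizes it anyway.
conn = httplib.HTTPConnection('swift-proxy.example.com', 8080)
conn.request('GET', '/v1/AUTH_account_b',
             headers={'X-Auth-Token': 'token-issued-for-account-a'})
print conn.getresponse().status
```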
YorikSar | notmyname: And by the way how can we have a complete package for a service without the auth middleware? | 19:54 |
dilemma | YorikSar: because when using that service stand-alone, you'll almost certainly need your own auth middleware | 19:55 |
notmyname | what dilemma said :-) | 19:55 |
* dilemma is doing exactly that | 19:56 | |
dolph_ | notmyname, write a functional test in keystone.test.functional, open a gerrit review with it along with an issue? | 19:56 |
notmyname | we have many servers that either don't have auth at all or have different auth systems | 19:56 |
dolph_ | can you* | 19:56 |
notmyname | dolph_: ya (FWIW, this is documented in your README in the swift section) | 19:56 |
dolph_ | notmyname, hmm... i see the comment, but i don't know how that's possibly true today | 19:58 |
dolph_ | notmyname, if it is true today, it's a serious bug on either swift or keystone's end | 19:59 |
YorikSar | dilemma, notmyname: For example, Nova requires at least Glance to work with images. Why don't we make Keystone such a default as well? | 19:59 |
notmyname | dolph_: agreed :-) | 19:59 |
*** jfluhmann has quit IRC | 19:59 | |
dolph_ | YorikSar, i think that will happen after keystone is core? | 19:59 |
*** jfluhmann has joined #openstack | 20:00 | |
notmyname | YorikSar: I don't think we can make swift depend on keystone | 20:00 |
creiht | YorikSar: I'm not sure how you can make keystone a hard requirement, because in a lot of places they are going to have their own auth system already | 20:01 |
*** tjikkun has quit IRC | 20:01 | |
creiht | for example, rackspace :) | 20:01 |
dilemma | YorikSar: the point here though is that the middleware lives in the Keystone project because when you use any part of the system without keystone, you don't need the middleware, and when you're using it with Keystone, you have the middleware. | 20:01 |
creiht | dilemma: ++ | 20:01 |
YorikSar | dolph_: Well, yes. So that people should integrate Keystone with an external custom auth system, not every service separately | 20:01 |
dolph_ | creiht, keystone will be the adapter between openstack and any existing auth system | 20:01 |
creiht | dolph_: I disagree, keystone is a reference common auth system for openstack | 20:02 |
*** reed has joined #openstack | 20:02 | |
redbo | There's no reason to make swift depend on keystone, but if the swift-keystone middleware wants to live in swift's middleware package, I don't think that'd hurt. | 20:02 |
dolph_ | creiht, it's that too | 20:02 |
notmyname | redbo: I think it makes more sense to stay in keystone. (like swauth) | 20:03 |
dolph_ | notmyname, it doesn't make any sense at all to me to have other project's middleware live inside keystone ;) | 20:04 |
*** dirkx_ has joined #openstack | 20:04 | |
creiht | I know! | 20:04 |
creiht | We should have a common middleware team that handles it! | 20:04 |
creiht | ;P | 20:04 |
dolph_ | notmyname, they have immediate dependencies on those projects (nova auth middleware imports from nova, for example... and only depends on keystone remotely) | 20:04 |
creiht | as its own project with PTL and everything | 20:04 |
creiht | :) | 20:04 |
creiht | dolph_: maybe a better question is, who is responsible for maintaining the middleware? | 20:06 |
dolph_ | creiht, exactly... and with dependencies on those outside projects, it doesn't make sense for it to live in keystone | 20:07 |
*** tjikkun has joined #openstack | 20:07 | |
*** tjikkun has joined #openstack | 20:07 | |
dolph_ | even weirder to me, is that rackspace has proprietary code in the keystone codebase | 20:08 |
dolph_ | ("legacy auth"... which is only "legacy" to rackspace, and completely useless to the rest of the world) | 20:08 |
*** nerdstein has quit IRC | 20:08 | |
notmyname | dolph_: that does sound odd. we solved that with swift/cloud files by having a separate package/codebase for the internal and legacy stuff | 20:09 |
dolph_ | notmyname, that would work here too | 20:10 |
notmyname | "rackswift" | 20:11 |
dolph_ | keystone.middleware.nova_token_auth or whatever should be moved to ~ nova.middleware.keystone | 20:11 |
dolph_ | changes like that would certainly make keystone more intuitive for both new developers and for new operators | 20:12 |
notmyname | dolph_: I don't think we're going to agree on this one :-) | 20:12 |
dolph_ | notmyname, are we disagreeing? | 20:12 |
YorikSar | dilemma: If you have separate Nova, Swift and Keystone nodes, you should not have to install Keystone on each of them just for one middleware file. | 20:12 |
creiht | why not? | 20:12 |
dilemma | I don't have Nova or Keystone at all... | 20:12 |
creiht | apt-get install keystone-middleware | 20:13 |
creiht | :) | 20:13 |
dolph_ | apt-get install nova-middleware-keystone | 20:13 |
creiht | dolph_: so back to my earlier question, who is going to write/maintain this middleware | 20:13 |
YorikSar | So we should package each middleware separately? | 20:13 |
*** jtanner has joined #openstack | 20:13 | |
creiht | including when the keystone featureset changes? | 20:13 |
YorikSar | Will there be separate packages for Quantum integration points in Nova? | 20:14 |
creiht | YorikSar: I would just say one | 20:14 |
dolph_ | YorikSar, either separately or with each independent project, but not with keystone | 20:14 |
*** mattray has quit IRC | 20:14 | |
creiht | like there is a mysql-client package if you don't want the whole db | 20:14 |
dolph_ | creiht, each independent project | 20:14 |
*** manish has quit IRC | 20:14 | |
*** dendro-afk is now known as dendrobates | 20:15 | |
dilemma | that's an excellent point YorikSar. Having the middleware in each project would work from both a dependency and installation perspective | 20:15 |
dolph_ | dilemma, ++ | 20:15 |
*** gnu111 has quit IRC | 20:16 | |
dolph_ | creiht, the middleware is fairly simple in terms of keystone featureset... and i don't think it would change much in the long run, if at all | 20:17 |
YorikSar | And one more thing. If we're going to keep Keystone auth middleware out of Nova, we have to either keep the existing Nova auth system or let it live without any auth at all. Both possibilities are bad, I guess. | 20:18 |
dolph_ | creiht, in general, all the middleware does is direct unauthenticated requests off to keystone for authentication, and then validate that authentication when the user comes back | 20:18 |
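A minimal WSGI sketch of that pattern, assuming a hypothetical validate_token() callable rather than the real keystone client:

```python
class TokenAuth(object):
    # Hypothetical sketch of token-validating middleware, not
    # keystone's actual implementation.
    def __init__(self, app, validate_token):
        self.app = app
        self.validate_token = validate_token

    def __call__(self, environ, start_response):
        # Unauthenticated or invalidly-authenticated requests get 401;
        # everything else passes through to the wrapped app.
        token = environ.get('HTTP_X_AUTH_TOKEN')
        if not token or not self.validate_token(token):
            start_response('401 Unauthorized',
                           [('Content-Type', 'text/plain')])
            return ['Authentication required\n']
        return self.app(environ, start_response)
```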
*** Gordonz has quit IRC | 20:18 | |
dilemma | if the outward-facing keystone API is fairly simple, that makes sense. The internal APIs used to write the middleware for each of the systems are likely much more complex. | 20:19 |
dolph_ | YorikSar, i think nova should be configured out of the box with keystone middleware, but if you opt to remove it, there is no longer any authentication against nova | 20:19 |
creiht | dolph_: I know how auth middleware works :) | 20:19 |
dolph_ | creiht, well... i don't. so ha. | 20:19 |
dolph_ | anyway, is anyone against moving auth middleware out of keystone and into each discrete project? it would make sense to me for keystone to provide/support a very simple sample middleware | 20:21 |
*** tjikkun has quit IRC | 20:21 | |
*** jtanner has quit IRC | 20:22 | |
YorikSar | dolph_: But why is Keystone so special? Why does Nova always have Glance and Quantum integration modules, but not a Keystone one? I don't see any reason why integration with default complements should be kept out of a service's codebase or basic package. | 20:22 |
YorikSar | dolph_: *not has, but will have, of cause | 20:23 |
YorikSar | **"of course", of course | 20:23 |
notmyname | dolph_: opened an issue https://github.com/rackspace/keystone/issues/139 | 20:24 |
dolph_ | notmyname, thanks! | 20:24 |
*** anotherjesse has joined #openstack | 20:25 | |
*** elmo has joined #openstack | 20:28 | |
*** YorikSar has quit IRC | 20:29 | |
*** anotherjesse_ has joined #openstack | 20:30 | |
*** YorikSar has joined #openstack | 20:30 | |
creiht | dolph_: I guess you wore me down enough. I would prefer otherwise, but if it has to be that way, that's fine with me | 20:30 |
notmyname | creiht: you aren't even on the swift team anymore! ;-) | 20:31 |
notmyname | you don't get a vote ;-) | 20:31 |
dolph_ | notmyname, rofl | 20:31 |
*** jtran has joined #openstack | 20:31 | |
jtran | hey all. is nova compatible w/ python2.7 now? Or is it still supposed to be using 2.6? | 20:32 |
redbo | last time I checked, we were still in swift-core :P | 20:32 |
YorikSar | jtran: I use 2.7 on my dev box. Had no issues with that. | 20:32 |
notmyname | lol, I know, I know. I'm not trying to steal your birthright :-) | 20:33 |
*** mfer has quit IRC | 20:33 | |
jtran | cool thx | 20:33 |
*** anotherjesse has quit IRC | 20:33 | |
*** anotherjesse_ is now known as anotherjesse | 20:33 | |
*** mfer has joined #openstack | 20:33 | |
*** lorin1 has quit IRC | 20:35 | |
*** ameade has quit IRC | 20:35 | |
*** lpatil has left #openstack | 20:35 | |
notmyname | IMO, things in swift should be related to the core storage system: stubs (like tempauth) or system features (like ratelimit, recon, and cache). that's why things like swauth and slogging (and now the origin stuff) are separate projects instead of in swift proper | 20:35 |
*** ameade has joined #openstack | 20:38 | |
*** danishman has joined #openstack | 20:38 | |
*** dragondm has joined #openstack | 20:40 | |
YorikSar | notmyname: It looks like tempauth does something more complex than the keystone middleware. So why not keep the latter in the codebase? | 20:47 |
*** mfer has quit IRC | 20:47 | |
*** tjikkun has joined #openstack | 20:50 | |
*** tjikkun has joined #openstack | 20:50 | |
*** Dunkirk has quit IRC | 20:56 | |
creiht | notmyname: lol :) | 20:57 |
*** cp16net has quit IRC | 20:57 | |
*** AhmedSoliman has joined #openstack | 20:58 | |
*** anotherjesse has quit IRC | 20:59 | |
*** anotherjesse has joined #openstack | 20:59 | |
*** bcwaldon has joined #openstack | 21:00 | |
uvirtbot | New bug: #825493 in glance "Glance client requires gettext module" [High,Triaged] https://launchpad.net/bugs/825493 | 21:01 |
notmyname | dolph_: YorikSar: am I supposed to be using swiftauth or tokenauth? the keystone instructions say tokenauth, but swiftauth seems more appropriate (although I haven't been able to load it yet) | 21:01 |
dolph_ | notmyname, ziad says swiftauth | 21:02 |
notmyname | dolph_: hmm...so the docs are out of date. any docs on it? | 21:03 |
notmyname | dolph_: what is keystone_url? | 21:03 |
notmyname | dolph_: and it seems that if any of these config values are missing it raises exceptions | 21:04 |
dolph_ | notmyname, i'm not aware of any other docs | 21:04 |
notmyname | (instead of gracefully failing) | 21:04 |
dolph_ | keystone_url should be either the service or admin api... | 21:05 |
dolph_ | notmyname, where is keystone_url defined? | 21:05 |
notmyname | dolph_: self.keystone_url = urlparse(conf.get('keystone_url')) | 21:05 |
*** ewindisch has joined #openstack | 21:07 | |
notmyname | dolph_: hmm...got the proxy loading with swiftauth, but now I'm only getting 401 responses | 21:07 |
dolph_ | there's an open bug that you have to create an admin token through keystone-manage... did you do that? | 21:08 |
*** FallenPegasus has joined #openstack | 21:08 | |
notmyname | dolph_: ah. got it. keystone_url needed port 5001, not 5000 | 21:08 |
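A sketch of the proxy-server.conf fragment implied by this exchange; only the keystone_url option and the swiftauth name are confirmed above, and the filter section naming follows the usual paste-deploy convention, so treat the rest as placeholders:

```ini
[filter:swiftauth]
# keystone_url must point at the admin API port (5001 here),
# not the service API port (5000), per notmyname's finding.
keystone_url = http://keystone.example.com:5001
```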
dolph_ | notmyname, ah cool - makes sense | 21:09 |
notmyname | dolph_: perhaps to you ;-) | 21:09 |
notmyname | dolph_: good news! swiftauth doesn't allow access to any account! | 21:09 |
dolph_ | notmyname, YAY! so what's the other middleware for lol | 21:10 |
notmyname | dolph_: no idea | 21:10 |
uvirtbot | New bug: #825489 in openstack-ci "new gerrit build lost documentation links" [High,New] https://launchpad.net/bugs/825489 | 21:10 |
*** FallenPegasus is now known as MarkAtwood | 21:12 | |
*** anotherjesse_ has joined #openstack | 21:14 | |
*** ewindisch has quit IRC | 21:16 | |
*** anotherjesse has quit IRC | 21:17 | |
*** anotherjesse_ is now known as anotherjesse | 21:17 | |
*** lts has quit IRC | 21:18 | |
*** dendrobates is now known as dendro-afk | 21:19 | |
*** MarkAtwood has quit IRC | 21:19 | |
*** msivanes has quit IRC | 21:19 | |
nhm | boo, I found out the C6100s from dell can't allocate all of the drives to a single sled. | 21:21 |
nhm | I was hoping I might be able to do 3 compute nodes + 1 storage node in a chassis. | 21:21 |
*** ejat has joined #openstack | 21:21 | |
*** ejat has joined #openstack | 21:21 | |
*** carlp has joined #openstack | 21:23 | |
*** jdurgin has joined #openstack | 21:27 | |
*** dirkx_ has quit IRC | 21:27 | |
*** rnirmal has quit IRC | 21:28 | |
*** PeteDaGuru has left #openstack | 21:30 | |
*** vernhart has joined #openstack | 21:31 | |
*** kbringard has quit IRC | 21:33 | |
*** jtran has quit IRC | 21:33 | |
*** martine_ has joined #openstack | 21:41 | |
*** imsplitbit has quit IRC | 21:43 | |
*** martine has quit IRC | 21:44 | |
*** grapex has quit IRC | 21:46 | |
*** jdurgin has quit IRC | 21:48 | |
*** carlp has quit IRC | 21:48 | |
*** martine_ has quit IRC | 21:49 | |
*** carlp has joined #openstack | 21:49 | |
*** jdurgin has joined #openstack | 21:50 | |
*** rnorwood has quit IRC | 21:57 | |
*** obino has quit IRC | 21:59 | |
*** obino has joined #openstack | 21:59 | |
*** amccabe has quit IRC | 21:59 | |
*** bsza has quit IRC | 22:02 | |
*** jfluhmann has quit IRC | 22:05 | |
*** MagicFab_ has joined #openstack | 22:06 | |
*** LiamMac has quit IRC | 22:06 | |
*** bcwaldon has quit IRC | 22:09 | |
*** Ryan_Lane has quit IRC | 22:12 | |
*** danishman has quit IRC | 22:15 | |
*** eday has quit IRC | 22:17 | |
*** ncode has quit IRC | 22:20 | |
*** lborda has quit IRC | 22:26 | |
*** ldlework has quit IRC | 22:27 | |
*** eday has joined #openstack | 22:28 | |
*** ChanServ sets mode: +v eday | 22:28 | |
*** ats has joined #openstack | 22:38 | |
ats | I have a question about the key displayed on openstack swift: st -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U group:user -K displayedkey upload upfiles file2.tgz | 22:40 |
ats | In this command, if somebody does "ps" on the machine, the key will be visible to them while the file uploads. Is there any way to hide the key? | 22:41 |
*** rnorwood has joined #openstack | 22:43 | |
*** bsza has joined #openstack | 22:44 | |
*** marrusl has quit IRC | 22:49 | |
*** mattray has joined #openstack | 22:49 | |
*** mattray has quit IRC | 22:49 | |
*** mattray has joined #openstack | 22:49 | |
*** stewart has joined #openstack | 22:49 | |
ats | Actually, I see an environment variable called ST_USER in the st script. I didn't try that. Let me try. | 22:50 |
ats | I see setting the ST_KEY environment variable avoids that. So scratch my question. | 22:52 |
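For the record, a sketch of that workaround (the ST_* variables as read by the st client of this era; the values are placeholders): exporting the credentials keeps them out of the argv that ps reports.

```sh
# Credentials come from the environment, not the command line, so
# they are not visible in `ps` output while the upload runs.
export ST_AUTH=https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0
export ST_USER=group:user
export ST_KEY=displayedkey
st upload upfiles file2.tgz
```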
*** mattray1 has joined #openstack | 22:52 | |
*** vladimir3p has quit IRC | 22:52 | |
*** mszilagyi_ has joined #openstack | 22:53 | |
*** bsza has quit IRC | 22:54 | |
*** mattray has quit IRC | 22:54 | |
*** mszilagyi has quit IRC | 22:55 | |
*** mszilagyi_ is now known as mszilagyi | 22:55 | |
*** mattray1 has quit IRC | 22:57 | |
*** rnorwood has quit IRC | 22:58 | |
*** ejat has quit IRC | 23:01 | |
*** Ryan_Lane has joined #openstack | 23:03 | |
*** mszilagyi_ has joined #openstack | 23:04 | |
*** freeflying has joined #openstack | 23:07 | |
*** mszilagyi has quit IRC | 23:07 | |
*** mszilagyi_ is now known as mszilagyi | 23:07 | |
*** anotherjesse has quit IRC | 23:07 | |
*** anotherjesse has joined #openstack | 23:07 | |
*** dolph_ has quit IRC | 23:07 | |
*** Ryan_Lane has quit IRC | 23:10 | |
*** AhmedSoliman has quit IRC | 23:21 | |
*** iRTermite has quit IRC | 23:24 | |
*** iRTermite has joined #openstack | 23:25 | |
*** ewindisch has joined #openstack | 23:27 | |
*** fysa has quit IRC | 23:30 | |
*** npmapn has quit IRC | 23:31 | |
*** katkee has quit IRC | 23:36 | |
*** heckj has quit IRC | 23:36 | |
*** huslage has joined #openstack | 23:39 | |
*** cereal_bars has quit IRC | 23:43 | |
*** clauden_ has quit IRC | 23:46 | |
*** jdurgin has quit IRC | 23:46 | |
*** carlp has quit IRC | 23:48 | |
*** carlp has joined #openstack | 23:49 | |
*** tryggvil___ has joined #openstack | 23:52 | |
*** ewindisch has quit IRC | 23:53 | |
*** anotherjesse has quit IRC | 23:54 | |
*** tryggvil has quit IRC | 23:56 | |
*** carlp has quit IRC | 23:56 | |
*** carlp has joined #openstack | 23:56 | |
*** ats has quit IRC | 23:56 | |
*** martine_ has joined #openstack | 23:59 |