*** pguth66 has quit IRC | 00:00 | |
*** pguth66 has joined #openstack | 00:00 | |
*** Eyk has quit IRC | 00:01 | |
*** kenh has joined #openstack | 00:02 | |
*** pguth66 has quit IRC | 00:03 | |
*** BK_man has quit IRC | 00:07 | |
*** jeffjapan has joined #openstack | 00:14 | |
*** nelson has quit IRC | 00:14 | |
*** nelson has joined #openstack | 00:15 | |
*** dragondm has quit IRC | 00:19 | |
*** dirkx_ has joined #openstack | 00:19 | |
*** joearnold has quit IRC | 00:27 | |
*** miclorb has quit IRC | 00:35 | |
*** zedas has joined #openstack | 00:36 | |
*** dirkx_ has quit IRC | 00:42 | |
*** patcoll has joined #openstack | 00:47 | |
*** patcoll has quit IRC | 01:03 | |
*** mattray has quit IRC | 01:06 | |
*** grapex has joined #openstack | 01:07 | |
nelson | Hey! | 01:08 |
nelson | http://swift.openstack.org/howto_installmultinode.html has been updated for swauth! Well done! | 01:08 |
*** miclorb has joined #openstack | 01:11 | |
gholt | nelson: Ah, cool. You all updated now? | 01:12 |
nelson | working on it. I just found howto_installmultinode's auth section. I'll try following it to see if the dox are at all buggy. | 01:13 |
gholt | Gotcha, k. | 01:13 |
*** mattray has joined #openstack | 01:14 | |
*** agarwalla has joined #openstack | 01:24 | |
* nelson needs to write a program which implements howto_installmultinode and Just Does It(tm) | 01:27 | |
*** jdurgin has quit IRC | 01:27 | |
*** mattray has quit IRC | 01:30 | |
nelson | gholt: the h_im instructions say to set up a filter:swauth section in proxy-server.conf, however, proxy-server whinges if it doesn't get a filter:auth section. | 01:31 |
gholt | You probably still have auth in your pipeline, it should be swauth now | 01:31 |
nelson | indeed, yes, I have a custom pipeline, and missed that, thanks. | 01:32 |
*** grapex has left #openstack | 01:33 | |
notmyname | cw: any thoughts on the xfs inode64 mount option? (http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_inode64_mount_option_for.3F) | 01:34 |
nelson | Auth subsystem prep failed: 401 Unauthorized | 01:34 |
gholt | anticw: ^^ [in case you don't see the cw] | 01:35 |
gholt | nelson: That is either the wrong -K or allow_account_management not equalling true for that proxy server. | 01:36 |
nelson | don't have allow_account_management true. Looks like I'm running into all of the 1.1 -> 1.3 changes head-on. | 01:37 |
nelson | no, wait, I do. | 01:38 |
gholt | nelson: The reasoning behind that is that devauth was its own server that could be firewalled separately from the proxy. Now swauth is part of the proxy pipeline, so some like to run a special firewalled proxy with allow_account_management = true and the public proxies with it off. | 01:38
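A minimal proxy-server.conf sketch of what gholt describes above, with swauth in the pipeline in place of the old auth filter and account management enabled only on the admin-facing proxy; the key is a placeholder, not an actual value:

    [DEFAULT]
    bind_port = 8080

    [pipeline:main]
    pipeline = healthcheck cache swauth proxy-server

    [app:proxy-server]
    use = egg:swift#proxy
    # enable on the firewalled/admin proxy only; leave off on the public proxies
    allow_account_management = true

    [filter:swauth]
    use = egg:swift#swauth
    super_admin_key = <the key passed to swauth-prep with -K>

    [filter:healthcheck]
    use = egg:swift#healthcheck

    [filter:cache]
    use = egg:swift#memcache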
nelson | and the -K is definitely correct - cut-n-paste. | 01:38 |
gholt | Hmm. Maybe just tail your log(s), run the prep, and post the logs and see what we see? | 01:38 |
nelson | http://paste.openstack.org/show/1328/ | 01:40 |
gholt | That looks pretty weird, gimme a sec to digest. :) | 01:43 |
gholt | I don't understand why yours is posting at that long path, mine logs POST /auth/v2/.prep | 01:44 |
nelson | swauth-prep -A http://alsted.wikimedia.org/auth/ -K justkidding | 01:44 |
nelson | should there be a trailing slash following "auth"? | 01:45 |
gholt | Yeah, that should be fine | 01:46
gholt | it'll add it if you forget it | 01:46 |
gholt | Looking at the swauth-prep code, it looks like it just does a POST <-A you give>v2/.prep | 01:47 |
nelson | oh.... I'll bet my rewriter is getting in the way. | 01:47 |
gholt | Oh! | 01:47 |
nelson | gimme a sec to take it out. | 01:47 |
gholt | Well, if it adds v1/AUTH_53189d113690458f9575e97096180ef3/ that would make perfect sense. :) | 01:48
nelson | hehe, it's proceeding apace.... | 01:48 |
nelson | I gotta teach my rewriter about the /auth URL. Should be a one-line fix. | 01:49 |
gholt | I wonder if the auth_prefix option in the swauth filter section would do whatcha want as well. | 01:50 |
gholt | Something like auth_prefix = //v1/AUTH_53189d113690458f9575e97096180ef3/auth%25 hehe | 01:51 |
gholt | Eh, that's wrong, but maybe you get the idea. Changing your rewriter might be safer. | 01:52 |
*** bluetux has quit IRC | 01:53 | |
*** mattray has joined #openstack | 01:53 | |
gholt | I'm just not sure if something else would grab onto the /v1 and not let swauth have it. | 01:53 |
*** mattray has quit IRC | 01:54 | |
gholt | anticw: The main thing we're wondering is if the lack of the xfs inode64 option is what causes the age-slowdown issues we were talking about at the conference. | 01:56 |
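For reference, inode64 is only a mount option; a hedged /etc/fstab sketch for an XFS storage partition (device and mount point are made up, the other options follow the swift install docs of the time):

    /dev/sdb1  /srv/node/sdb1  xfs  noatime,nodiratime,nobarrier,logbufs=8,inode64  0 0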
nelson | it's not a problem to change the rewriter to just leave any /auth URL alone. | 01:57 |
*** dendrobates is now known as dendro-afk | 02:01 | |
*** dendro-afk is now known as dendrobates | 02:02 | |
*** adjohn has joined #openstack | 02:07 | |
*** _adjohn has joined #openstack | 02:07 | |
*** ike1 has joined #openstack | 02:08 | |
ike1 | i can not connect to the lan gateway after i installed nova | 02:08
ike1 | it shows the ip address as 192.168.122.1 | 02:09
ike1 | but the correct ip should be: 192.168.2.228 | 02:09
*** dprince has joined #openstack | 02:09 | |
*** lvaughn has quit IRC | 02:10 | |
ike1 | i checked /etc/network/interfaces; in this config file, the ip address is 192.168.2.228 | 02:10
ike1 | i want to restart the network, but it says it can not read the /etc/network/interfaces file | 02:11
*** adjohn has quit IRC | 02:11 | |
*** _adjohn is now known as adjohn | 02:11 | |
*** HugoKuo has joined #openstack | 02:13 | |
*** cloudgroups has joined #openstack | 02:13 | |
*** cloudgroups has left #openstack | 02:14 | |
*** freeflyi1g has quit IRC | 02:22 | |
*** dendrobates is now known as dendro-afk | 02:22 | |
*** miclorb has quit IRC | 02:23 | |
*** freeflying has joined #openstack | 02:24 | |
*** obino has quit IRC | 02:30 | |
*** obino has joined #openstack | 02:30 | |
*** Dweezahr has quit IRC | 02:34 | |
*** Dweezahr has joined #openstack | 02:35 | |
HugoKuo | Is there any flag for auto-associate public IP ? | 02:39 |
*** JuanPerez has joined #openstack | 02:46 | |
*** Juan_Perez has joined #openstack | 02:46 | |
*** JuanPerez has quit IRC | 02:46 | |
*** Juan_Perez has quit IRC | 02:46 | |
ike1 | HugoKuo: what do you mean? | 02:52
ike1 | what is the flag? | 02:52 |
HugoKuo | the flag in nova.conf | 02:53 |
HugoKuo | or parameter | 02:53 |
ike1 | i check it | 02:53 |
ike1 | just yesterday everything was correct; i used halt to shut down the server | 02:53
HugoKuo | when an instance comes up, a private IP is bound to the instance .... | 02:54
ike1 | today when i powered on the server, i found i can not use putty to ssh to it | 02:54
HugoKuo | host or guest instance ? | 02:54 |
ike1 | i just install nova, not run instance yet | 02:54 |
ike1 | host | 02:54 |
ike1 | i have not install image | 02:55 |
ike1 | i plan to install an image for a guest instance today | 02:55
*** s1cz is now known as s1cz- | 02:56 | |
ike1 | i don't understand why the server ip automatically changed to another ip address | 02:57
ike1 | in nova.conf dhcpbrige = /etc/bin/nova-dhcpbrige | 03:00 |
*** dprince has quit IRC | 03:12 | |
*** AimanA is now known as HouseAway | 03:13 | |
*** zenmatt has quit IRC | 03:13 | |
*** zenmatt_ has joined #openstack | 03:14 | |
*** pLr has quit IRC | 03:16 | |
raggi_ | is there a way to forcibly rebuild novas portion of the iptables rules? | 03:20 |
raggi_ | http://pastie.textmate.org/private/hpkd522cnpzqnagmzuskda | 03:24 |
raggi_ | it's installed rules with /26 instead of /24 | 03:24 |
*** hadrian has quit IRC | 03:26 | |
*** adjohn has quit IRC | 03:29 | |
*** mdomsch has quit IRC | 03:30 | |
*** adjohn has joined #openstack | 03:30 | |
HugoKuo | ikel , the new ip is your instance network's gateway , it'll bind to flat_interface | 03:33 |
ike1 | HugoKuo: i used vi (nano may have a problem) to edit the interfaces file | 03:37
ike1 | # The loopback network interface | 03:38 |
ike1 | auto lo | 03:38 |
ike1 | iface lo inet loopback | 03:38 |
ike1 | auto eth0 | 03:38 |
ike1 | iface eth0 inet static | 03:38 |
ike1 | #bridge_ports eth0 | 03:38 |
ike1 | bridge_stp off | 03:38 |
ike1 | bridge_maxwait 0 | 03:38 |
ike1 | bridge_fd 0 | 03:38 |
ike1 | address 192.168.2.228 | 03:38 |
ike1 | netmask 255.255.255.0 | 03:38 |
ike1 | broadcast 192.168.2.255 | 03:38 |
ike1 | gateway 192.168.2.1 | 03:38 |
ike1 | dns-nameservers 202.103.24.68 | 03:38 |
ike1 | now i can connect to the server | 03:38 |
HugoKuo | I guess you rewrote br100 to eth0 , am I right ? | 03:39
ike1 | yes | 03:40 |
ike1 | you are right | 03:40 |
ike1 | will this bring some problems? | 03:41
ike1 | the server has only one NIC; i use this server in single-node mode | 03:43
HugoKuo | ok | 03:44 |
HugoKuo | try to set public_interface = eth0 and flat_interface = eth0 | 03:44
ike1 | in nova.conf? | 03:45 |
HugoKuo | did you install from Script ? | 03:45 |
HugoKuo | yes | 03:45 |
ike1 | yes | 03:46 |
ike1 | i use script install mode. | 03:46 |
ike1 | this is my nova.conf : | 03:46 |
ike1 | ------------- | 03:46 |
ike1 | --dhcpbridge_flagfile=/etc/nova/nova.conf | 03:46 |
ike1 | --dhcpbridge=/usr/bin/nova-dhcpbridge | 03:46 |
ike1 | --logdir=/var/log/nova | 03:46 |
ike1 | --state_path=/var/lib/nova | 03:46 |
ike1 | --lock_path=/var/lock/nova | 03:46 |
ike1 | --verbose | 03:46 |
ike1 | --s3_host=192.168.2.228 | 03:46 |
ike1 | --rabbit_host=192.168.2.228 | 03:46 |
ike1 | --cc_host=192.168.2.228 | 03:46 |
ike1 | --ec2_url=http://192.168.2.228:8773/services/Cloud | 03:46 |
ike1 | --fixed_range=192.168.2.0/12 | 03:46 |
ike1 | --network_size=8 | 03:46 |
ike1 | --FAKE_subdomain=ec2 | 03:46 |
ike1 | --routing_source_ip=192.168.2.228 | 03:46 |
*** rchavik has joined #openstack | 03:47 | |
HugoKuo | ikel | 03:48 |
ike1 | . | 03:48 |
HugoKuo | ikel , plz paste on http://paste.openstack.org/ next time | 03:49
ike1 | ok, | 03:49 |
HugoKuo | http://wiki.openstack.org/FlagsGrouping | 03:49 |
ike1 | http://paste.openstack.org/raw/1329/ | 03:51 |
ike1 | can you see what i post ? | 03:52 |
HugoKuo | my fault , | 03:53 |
HugoKuo | --public_interface | 03:53 |
HugoKuo | --flat_interface | 03:53 |
ike1 | sorry, | 03:53 |
HugoKuo | you can check the flag groups from the link that I posted | 03:53
ike1 | ok, i read the link | 03:54 |
HugoKuo | There are too many flags . | 03:54
ike1 | yes! | 03:55 |
ike1 | but i can not find the --public_interface flag in this link? | 03:57
ike1 | in "Configuring Flat DHCP Networking" | 04:00 |
ike1 | i find : --public_interface=eth0 | 04:00 |
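A hedged sketch of the flags HugoKuo is pointing at, in the flag-file style of ike1's nova.conf paste above (whether they fix this particular single-NIC setup is not confirmed here):

    --public_interface=eth0
    --flat_interface=eth0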
*** adjohn has quit IRC | 04:00 | |
*** patri0t has quit IRC | 04:04 | |
*** gaveen has joined #openstack | 04:08 | |
*** gaveen has joined #openstack | 04:08 | |
*** miclorb_ has joined #openstack | 04:13 | |
*** adjohn has joined #openstack | 04:20 | |
*** omidhdl has joined #openstack | 04:22 | |
*** maplebed has quit IRC | 04:32 | |
*** santhosh has joined #openstack | 04:37 | |
*** _adjohn has joined #openstack | 04:39 | |
*** santhosh has quit IRC | 04:40 | |
*** santhosh has joined #openstack | 04:40 | |
*** adjohn has quit IRC | 04:42 | |
*** _adjohn is now known as adjohn | 04:42 | |
*** kashyap has joined #openstack | 04:42 | |
*** gaveen has quit IRC | 04:44 | |
*** gaveen has joined #openstack | 04:45 | |
*** f4m8_ is now known as f4m8 | 04:49 | |
*** dysinger has joined #openstack | 04:57 | |
*** santhosh has quit IRC | 04:59 | |
*** santhosh has joined #openstack | 05:01 | |
*** miclorb_ has quit IRC | 05:07 | |
*** hagarth has joined #openstack | 05:08 | |
*** Zangetsue has joined #openstack | 05:09 | |
*** mattray has joined #openstack | 05:16 | |
*** miclorb has joined #openstack | 05:25 | |
*** guynaor has joined #openstack | 05:27 | |
*** guynaor has left #openstack | 05:27 | |
*** arun_ has joined #openstack | 05:30 | |
*** arun_ has joined #openstack | 05:30 | |
*** zenmatt_ has quit IRC | 05:34 | |
*** koji-iida has joined #openstack | 05:34 | |
*** koji-iida has quit IRC | 05:37 | |
*** ccustine has quit IRC | 05:39 | |
*** ike1 has quit IRC | 05:41 | |
*** ike1 has joined #openstack | 05:48 | |
*** kashyap has quit IRC | 05:49 | |
*** kashyap has joined #openstack | 06:06 | |
*** hansin has joined #openstack | 06:06 | |
*** dysinger has quit IRC | 06:07 | |
*** hansin has quit IRC | 06:08 | |
*** hansin has joined #openstack | 06:10 | |
*** hansin has quit IRC | 06:15 | |
*** infinite-scale has joined #openstack | 06:19 | |
ike1 | Unable to run euca-describe-images. Is euca2ools environment set up? | 06:21 |
*** omidhdl has left #openstack | 06:22 | |
*** guigui has joined #openstack | 06:23 | |
*** sebastianstadil has quit IRC | 06:23 | |
*** miclorb has quit IRC | 06:25 | |
*** dobber has joined #openstack | 06:36 | |
*** nerens has joined #openstack | 06:40 | |
*** keds has joined #openstack | 06:41 | |
*** obino has quit IRC | 06:43 | |
*** dysinger has joined #openstack | 06:46 | |
*** omidhdl has joined #openstack | 06:47 | |
*** keds has quit IRC | 06:51 | |
*** miclorb_ has joined #openstack | 06:54 | |
*** toluene has quit IRC | 06:57 | |
*** obino has joined #openstack | 06:58 | |
*** toluene has joined #openstack | 06:59 | |
*** med_out is now known as medberry | 07:00 | |
*** hggdh has joined #openstack | 07:08 | |
*** mattray has quit IRC | 07:10 | |
*** katkee has joined #openstack | 07:11 | |
*** Kronick has joined #openstack | 07:12 | |
*** katkee has quit IRC | 07:12 | |
*** lionel has quit IRC | 07:16 | |
*** obino has quit IRC | 07:16 | |
*** obino has joined #openstack | 07:17 | |
*** obino has quit IRC | 07:18 | |
*** rcc has joined #openstack | 07:18 | |
*** hggdh has quit IRC | 07:19 | |
*** CloudChris has joined #openstack | 07:20 | |
*** CloudChris has left #openstack | 07:20 | |
*** lionel has joined #openstack | 07:21 | |
*** ChameleonSys has quit IRC | 07:26 | |
*** ChameleonSys has joined #openstack | 07:27 | |
*** nacx has joined #openstack | 07:31 | |
*** CloudChris has joined #openstack | 07:41 | |
*** CloudChris has left #openstack | 07:41 | |
*** openstackjenkins has quit IRC | 07:44 | |
*** openstackjenkins has joined #openstack | 07:46 | |
*** zul has joined #openstack | 07:48 | |
*** daveiw has joined #openstack | 07:53 | |
*** dirkx_ has joined #openstack | 08:05 | |
*** dendro-afk is now known as dendrobates | 08:06 | |
*** dirkx_ has quit IRC | 08:07 | |
*** openstackjenkins has quit IRC | 08:09 | |
*** openstackjenkins has joined #openstack | 08:09 | |
*** miclorb_ has quit IRC | 08:13 | |
*** xavicampa has joined #openstack | 08:15 | |
*** dirkx_ has joined #openstack | 08:16 | |
*** katkee has joined #openstack | 08:18 | |
*** lborda has joined #openstack | 08:20 | |
*** CloudChris has joined #openstack | 08:22 | |
*** hggdh has joined #openstack | 08:23 | |
*** dirkx_ has quit IRC | 08:23 | |
*** katkee has quit IRC | 08:24 | |
*** toluene has quit IRC | 08:27 | |
*** toluene has joined #openstack | 08:29 | |
*** katkee has joined #openstack | 08:32 | |
*** dirkx_ has joined #openstack | 08:34 | |
*** CloudChris has left #openstack | 08:36 | |
*** dirkx__ has joined #openstack | 08:37 | |
*** dirkx_ has quit IRC | 08:37 | |
*** hggdh has quit IRC | 08:38 | |
*** miclorb has joined #openstack | 08:42 | |
*** kashyap has quit IRC | 08:42 | |
*** jeffjapan has quit IRC | 08:44 | |
*** dirkx__ has quit IRC | 08:45 | |
*** dendrobates is now known as dendro-afk | 08:45 | |
*** dirkx_ has joined #openstack | 08:46 | |
*** guynaor has joined #openstack | 08:47 | |
*** dirkx_ has quit IRC | 08:48 | |
*** MarkAtwood has quit IRC | 08:48 | |
katkee | vishy: hello, your script works with maverick64 inside virtualbox | 08:50 |
*** lborda has quit IRC | 08:50 | |
*** zul has quit IRC | 08:51 | |
katkee | vishy: we are going to try your script on a bare server, do you think your script will work with ubuntu server 11.04 ? | 08:51 |
*** ike1 has quit IRC | 08:51 | |
*** citral has joined #openstack | 08:56 | |
*** lborda has joined #openstack | 08:59 | |
*** CloudChris has joined #openstack | 09:06 | |
*** watcher has joined #openstack | 09:11 | |
*** dirkx_ has joined #openstack | 09:11 | |
*** Kronick has quit IRC | 09:12 | |
*** Kronick has joined #openstack | 09:15 | |
*** zul has joined #openstack | 09:16 | |
*** Kronick has left #openstack | 09:18 | |
*** adjohn has quit IRC | 09:19 | |
*** katkee has quit IRC | 09:23 | |
*** miclorb has quit IRC | 09:23 | |
radek | hi, I've set up a test installation of openstack cactus on one server. everything seems to work fine except that virtual machines show only 100Mb nic speed | 09:25
radek | is that normal | 09:25 |
radek | i'm using vlan mode | 09:25 |
*** rchavik has quit IRC | 09:26 | |
*** zul has quit IRC | 09:29 | |
*** CloudChris has quit IRC | 09:29 | |
*** CloudChris has joined #openstack | 09:29 | |
*** dirkx__ has joined #openstack | 09:29 | |
dsockwell | have you double-checked the interface properties of your server and switch? | 09:29 |
*** dirkx_ has quit IRC | 09:32 | |
radek | eth0 is running on 1G | 09:34 |
radek | on br100 I can't see speed information | 09:35 |
radek | is it something that i have to configure on bridge interface ? | 09:36 |
*** cuzoka has joined #openstack | 09:39 | |
*** lborda has quit IRC | 09:40 | |
*** watcher has quit IRC | 09:40 | |
mtaylor | soren: hey! where you at? | 09:44 |
*** kashyap has joined #openstack | 09:46 | |
*** lborda has joined #openstack | 09:47 | |
*** nerens has quit IRC | 09:47 | |
*** nerens has joined #openstack | 09:50 | |
*** perestrelka has quit IRC | 09:50 | |
*** lborda has quit IRC | 09:51 | |
radek | dsockwell any ideas ? | 09:54
*** rchavik has joined #openstack | 09:54 | |
*** rchavik has joined #openstack | 09:54 | |
*** Binbin is now known as Binbinaway | 09:54 | |
*** dysinger has quit IRC | 09:59 | |
toluene | my god, QEMU has driven me crazy. I got "internal error Process exited while reading console log output: chardev opening backed "file" failed". Does anyone know how to fix it ? | 10:00
*** dirkx__ has quit IRC | 10:01 | |
*** Eyk has joined #openstack | 10:04 | |
*** smaresca has quit IRC | 10:06 | |
*** daedalusflew has quit IRC | 10:06 | |
*** s1cz- has quit IRC | 10:06 | |
soren | mtaylor: Jokai. | 10:06 |
soren | toluene: Which version of libvirt? | 10:07 |
mtaylor | soren: ah, I'm in Arany - I was going to poke you about a rabbitmq problem I was having | 10:07 |
mtaylor | soren: it seems that I cannot install rabbitmq on maverick cloud servers :( | 10:08 |
*** daedalusflew has joined #openstack | 10:08 | |
soren | mtaylor: Poke away, but I may be slow to respond. | 10:08 |
mtaylor | soren: well, for now I just changed to using lucid and we'll see if that fixes it | 10:08 |
toluene | soren, my libvirt is 0.8.3 | 10:09 |
soren | toluene: Stock? | 10:09 |
soren | toluene: Some versions of libvirt had problems with file backed chardevs. | 10:10 |
toluene | soren, I cd into the instance dir, and run virsh create libvirt.xml, it told me "internal error Process exited while reading console log output: chardev: opening backed "file" failed" | 10:11
soren | toluene: I understand. | 10:13 |
soren | toluene: I'm asking about libvirt versions. | 10:13 |
soren | toluene: Is it a stock 0.8.3 libvirt? | 10:13 |
toluene | soren, I apt-get installed it from the official site | 10:14 |
soren | the official site? | 10:14 |
soren | Ubuntu? | 10:14 |
soren | Debian? | 10:14 |
soren | Not libvirt, surely. | 10:14 |
toluene | soren, I am using ubuntu 10.10 | 10:15 |
*** Dweezahr has quit IRC | 10:15 | |
*** Dweezahr has joined #openstack | 10:16 | |
soren | toluene: And you're using the libvirt from Ubuntu itself? | 10:18 |
soren | toluene: How are you installing Nova? | 10:18 |
toluene | soren I install it from the source | 10:18 |
soren | toluene: Ok. You need a newer libvirt. | 10:19 |
*** smaresca has joined #openstack | 10:19 | |
*** dendro-afk is now known as dendrobates | 10:25 | |
mtaylor | soren: GGAAAAHHHHHHHHHHHH | 10:25 |
mtaylor | soren: AAAAAARRRRRRRRRGGGGGGGGGGGGGHHHHHHHHHHHHHHH | 10:25 |
*** hggdh has joined #openstack | 10:26 | |
*** ChameleonSys has quit IRC | 10:30 | |
*** CloudChris has left #openstack | 10:30 | |
*** s1cz- has joined #openstack | 10:31 | |
soren | mtaylor: No, tell me how you really feel. Don't hold back. | 10:32 |
mtaylor | soren: apt-get install rabbitmq-server should not, in my opinion, hang | 10:32 |
mtaylor | soren: or fail | 10:32 |
soren | mtaylor: Sounds like a reasonable opinion to hold. | 10:32
soren | mtaylor: Reality holds a different opinion? | 10:33 |
mtaylor | soren: well, spinning a cloud server and trying that command so far has been quite fail | 10:33 |
soren | mtaylor: Which image id? | 10:33 |
mtaylor | soren: 69 definitely breaks - trying 49 now | 10:34 |
mtaylor | soren: (I think rackspace may be rate limiting me at the moment- spinning up servers is getting slow) | 10:34 |
soren | mtaylor: flavour? | 10:34 |
mtaylor | soren: 4 | 10:35 |
soren | Oh. | 10:35 |
mtaylor | soren: problem? | 10:35 |
soren | No, just that I was starting a flavour 1 one. | 10:35 |
mtaylor | soren: it _shouldn't_ make a difference... | 10:35 |
mtaylor | soren: is a 1 big enough to run nova smoketests? | 10:36 |
*** katkee has joined #openstack | 10:36 | |
soren | mtaylor: Not sure. rabbitmq install worked great on the smaller one. Trying the larger one. | 10:37
mtaylor | soren: ok. I'll try the smaller one | 10:37 |
soren | mtaylor: ...worked fine on the larger one, too. | 10:37 |
soren | mtaylor: No problems at all. | 10:38 |
mtaylor | soren: goddammit | 10:38 |
mtaylor | soren: I hate everyone | 10:38 |
* soren hugs mtaylor | 10:38 | |
soren | mtaylor: Where are you at? I'm available for a bit of hacking on it. | 10:38 |
mtaylor | soren: Arany - I can leave and meet you somewhere though | 10:39 |
zigo-_- | soren: Hello ! | 10:40 |
soren | mtaylor: Exit Arany. Turn left. Walk 10 feet. | 10:40 |
soren | mtaylor: Meet you there. | 10:40 |
zigo-_- | I just saw a presentation you did of Openstack, apparently at a French university. | 10:40
zigo-_- | I was wondering if it was available, as I want to do a presentation here in China. | 10:41 |
toluene | soren, problem solved. I forgot to add the user and group permission for qemu.conf | 10:42 |
zigo-_- | soren: Do you have the PDF / PPT / ODP somewhere? | 10:42 |
zigo-_- | I'm not sure I'll do as well as you did, but I'll try... | 10:45
*** cm6051 has joined #openstack | 10:46 | |
*** goatrider has joined #openstack | 10:46 | |
*** hggdh has quit IRC | 10:49 | |
*** Eyk has quit IRC | 10:50 | |
*** ChameleonSys has joined #openstack | 10:51 | |
*** dysinger has joined #openstack | 10:53 | |
*** cloudgroups has joined #openstack | 10:55 | |
*** dendrobates is now known as dendro-afk | 10:57 | |
*** cloudgroups has left #openstack | 11:00 | |
*** mgoldmann has joined #openstack | 11:00 | |
*** ctennis has quit IRC | 11:03 | |
*** markvoelker has joined #openstack | 11:04 | |
dobber | hi, i'm having trouble with swift documentation 3.3.1. - installing proxy node | 11:13 |
dobber | the "rebalance the rings" commands do not work | 11:13
dobber | http://docs.openstack.org/cactus/openstack-object-storage/admin/content/installing-and-configuring-the-proxy-node.html | 11:13 |
dobber | in point 7 i configure some storage devices, so i guess stuff should be rebalanced between them, but i never installed storage devices | 11:14 |
dobber | it didn't say anywhere how or when to install them | 11:14
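What the howto expects before the rebalance step is that every storage device has already been added to all three rings; a hedged sketch, with the zone, IP, device name and weight as illustrative values and the ports as in the multinode howto:

    cd /etc/swift
    swift-ring-builder account.builder add z1-10.0.0.11:6002/sdb1 100
    swift-ring-builder container.builder add z1-10.0.0.11:6001/sdb1 100
    swift-ring-builder object.builder add z1-10.0.0.11:6000/sdb1 100
    swift-ring-builder account.builder rebalance
    swift-ring-builder container.builder rebalance
    swift-ring-builder object.builder rebalance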
*** omidhdl has left #openstack | 11:19 | |
*** ctennis has joined #openstack | 11:19 | |
*** ctennis has joined #openstack | 11:19 | |
*** dirkx_ has joined #openstack | 11:20 | |
*** toluene has quit IRC | 11:23 | |
*** katkee has quit IRC | 11:24 | |
*** mgoldmann_ has joined #openstack | 11:26 | |
*** mgoldmann has quit IRC | 11:26 | |
*** goatrider has quit IRC | 11:28 | |
*** mgoldmann_ has quit IRC | 11:30 | |
zigo-_- | dobber: There USED to be a swift-auth, and it's now gone. | 11:37 |
dobber | zigo-_-: it's in 3.3.2. documentation | 11:37 |
dobber | http://docs.openstack.org/cactus/openstack-object-storage/admin/content/installing-and-configuring-auth-nodes.html | 11:37 |
dobber | do i just skip it ? | 11:37 |
zigo-_- | Should I just repeat? | 11:38 |
zigo-_- | :) | 11:38 |
zigo-_- | The doc is outdated, that's it. | 11:38 |
dobber | i have no idea what swift-auth was, what it used to do and what replaces it. | 11:38 |
dobber | ok moving ahead | 11:39 |
zigo-_- | I /think/ it's now included in swift-proxy-server... | 11:39 |
zigo-_- | Not sure. | 11:39 |
zigo-_- | Hum... it does, because in proxy-server.conf there are the swauth statements from that auth-server.conf ... | 11:40
zigo-_- | Hint: use the wiki howto, not the doc, as the wiki is updated. | 11:40 |
dobber | ok | 11:43 |
dobber | http://swift.openstack.org/howto_installmultinode.html | 11:43 |
dobber | this one ? | 11:43 |
zigo-_- | yup | 11:45 |
*** gaveen has quit IRC | 11:46 | |
dobber | kk going over again | 11:46 |
*** Eyk has joined #openstack | 11:47 | |
*** lurkaboo has quit IRC | 11:54 | |
*** dendro-afk is now known as dendrobates | 12:06 | |
*** jness has left #openstack | 12:13 | |
*** aloga has joined #openstack | 12:15 | |
*** zul has joined #openstack | 12:20 | |
*** perestrelka has joined #openstack | 12:24 | |
*** j05h has quit IRC | 12:26 | |
*** zul has quit IRC | 12:35 | |
*** guynaor has left #openstack | 12:41 | |
*** dendrobates is now known as dendro-afk | 12:46 | |
*** RoAkSoAx has quit IRC | 12:52 | |
*** Binbinaway has quit IRC | 12:52 | |
*** j05h has joined #openstack | 12:52 | |
*** dprince has joined #openstack | 12:52 | |
*** hggdh has joined #openstack | 12:53 | |
*** Binbinaway has joined #openstack | 12:56 | |
*** zenmatt has joined #openstack | 12:58 | |
*** openstackjenkins has quit IRC | 13:00 | |
*** aliguori has joined #openstack | 13:00 | |
*** openstackjenkins has joined #openstack | 13:00 | |
*** aliguori has quit IRC | 13:00 | |
*** aliguori has joined #openstack | 13:01 | |
*** NashTrash has joined #openstack | 13:01 | |
NashTrash | Good morning Openstack'ers | 13:01 |
NashTrash | Anyone available for a Swift question? | 13:01 |
alekibango | NashTrash: try it | 13:02 |
*** hadrian has joined #openstack | 13:02 | |
alekibango | NashTrash: reading this might improve your chances :) http://catb.org/~esr/faqs/smart-questions.html | 13:02 |
NashTrash | Thanks, I think. Hope I am not coming across as rude. | 13:04
alekibango | :) | 13:04 |
alekibango | np nash | 13:04 |
alekibango | just ask | 13:04 |
NashTrash | I have my Swift cluster up and running but want to add a second proxy for HA. | 13:05 |
*** lorin1 has joined #openstack | 13:05 | |
NashTrash | I added it according to the directions here: http://swift.openstack.org/howto_installmultinode.html. | 13:05 |
NashTrash | I then put a Pound load balancer out front with a virtual IP that redirects the requests to both proxy machines. | 13:06 |
*** mgoldmann has joined #openstack | 13:06 | |
NashTrash | I get an error when I try the following "swauth-prep -A https://99.99.99.94:8080/auth/ -K XXXXXX" where the 99. IP is the virtual IP assigned to the LB. | 13:07 |
NashTrash | The error is: Auth subsystem prep failed: 500 Internal Server Error | 13:07 |
alekibango | NashTrash: what OS, swift version? | 13:08 |
NashTrash | Ubuntu 10.10 with the Swift pulled from ppa:swift-core/trunk yesterday | 13:08 |
*** lele_ has joined #openstack | 13:09 | |
alekibango | NashTrash: we have similar problems... lol | 13:09 |
NashTrash | Ha | 13:09 |
NashTrash | Not forever alone | 13:09 |
lele_ | Hi all ! | 13:09 |
NashTrash | hola | 13:09 |
*** dendro-afk is now known as dendrobates | 13:09 | |
alekibango | zigo-_-, dobber you are not alone ^^ | 13:09 |
lele_ | Hello :) , I have a quick question: is there any way to delete compute instances on the controller without doing a delete on the mysql row ? | 13:10
NashTrash | If I try the swauth-prep command directly to either of the proxy machines it works just fine. | 13:10 |
dobber | NashTrash: same problem here | 13:10 |
*** Kronick has joined #openstack | 13:10 | |
*** hagarth has quit IRC | 13:11 | |
NashTrash | Hmm...Does it matter that the load balancer is translating the request from HTTPS to HTTP? | 13:11 |
alekibango | well,. you need https | 13:11 |
NashTrash | Is there a way to set the proxy machines to take HTTP since we are offloading that workload to the LB? | 13:11 |
alekibango | i think that they will run in http mode automatically when there are no certificates | 13:13
alekibango | imho | 13:13 |
alekibango | you need to check docs or source | 13:13 |
lele_ | Hi all, is there a way to delete compute instances "cleanly" instead of deleting the row on the controller mysql database ? | 13:13 |
alekibango | (i dont know more) | 13:13 |
alekibango | lele_: ... how is that unclean? | 13:14 |
NashTrash | alekibango: I will take a look at that. I am also seeing if Pound can just pass through the HTTPS and not change it to HTTP | 13:14 |
NashTrash | dobber: Are you using Pound as the LB also? | 13:14 |
dobber | No | 13:15 |
alekibango | NashTrash: no, he is just running swift for the first time | 13:15
alekibango | today | 13:15 |
NashTrash | alekibango: That was me yesterday. | 13:15 |
alekibango | NashTrash: can you please do ls -l /etc/swift | 13:15 |
alekibango | and paste it somewhere? | 13:15 |
alekibango | (on machine with proxy... and possibly also on storage node) | 13:16 |
alekibango | i think i have few files there which are not needed anymore | 13:16 |
alekibango | but i am not sure | 13:16 |
alekibango | do you have auth.db there? | 13:16 |
*** galthaus has joined #openstack | 13:16 | |
NashTrash | lele_: Do you mean something other than euca-terminate-instances? | 13:16 |
NashTrash | alekibango: http://paste.openstack.org/show/1332/ | 13:17 |
alekibango | NashTrash: ty | 13:17 |
lele_ | alekibango: i thought that maybe there was a command like "nova-manage ... something" ... :) but i flipped through the docs and "deleting a compute node" is not documented | 13:17
alekibango | dobber: ^^ | 13:17 |
NashTrash | alekibango: auth.db is not needed anymore. | 13:17 |
dobber | alekibango: same as me | 13:17 |
alekibango | aha, lele_ you are asking how to terminate it? | 13:17 |
alekibango | like end user command? | 13:18 |
*** cuzoka has quit IRC | 13:18 | |
alekibango | lele_: euca-terminate-instances | 13:18 |
NashTrash | alekibango: Are you following the directions at docs.openstack.org or http://swift.openstack.org/howto_installmultinode.html? | 13:18 |
*** patcoll has joined #openstack | 13:18 | |
lele_ | alekibango: thats for instances, i was asking for removing a compute node , on a multinode environment, from the controller database / everywhere | 13:18 |
alekibango | lele_: or using openstack tools: nova delete | 13:18 |
alekibango | ah node.... | 13:19 |
alekibango | lele_: sorry, i didn't understand you | 13:19
lele_ | alekibango: no prob :) | 13:19 |
alekibango | nova-manage service | 13:19 |
alekibango | that might be waht you want | 13:19 |
alekibango | disable it | 13:19 |
lele_ | yep, ive already done that, just wondering if there was a way to completely remove it, but i guess that the only way is to delete the row in the controller mysql database | 13:20
NashTrash | lele_: AFAIK the only way to completely remove any of the services is to delete the row in the database. Otherwise you can only disable it. | 13:20 |
notmyname | NashTrash: comment out the certificate stuff in the proxy configs and the proxy servers will use http | 13:21 |
alekibango | lele_: i dont know about other options too | 13:21 |
notmyname | at rackspace we use the LBs for SSL termination, too | 13:21 |
NashTrash | notmyname: Cool. Here we go. | 13:21 |
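A hedged sketch of what notmyname means: in /etc/swift/proxy-server.conf, commenting out the certificate options makes the proxy listen on plain HTTP so the load balancer can terminate SSL (file paths are examples):

    [DEFAULT]
    bind_port = 8080
    # cert_file = /etc/swift/cert.crt
    # key_file = /etc/swift/cert.key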
dobber | NashTrash: i'm using the second documentation. the first one aparently is out of date | 13:22 |
lele_ | NashTrash: yep, thats what i thought , cause i cloned a compute instance and started it on a new xcp node with a new hostname; the compute gets registered on the controller but when i do a nova-manage service list, i see the new compute node with XXX , not happy faces :( | 13:22
alekibango | notmyname: is swift able to run on only one server? | 13:24 |
NashTrash | notmyname: Bingo. I am good to go. Thanks. I am quite HA now. I am adding in more disks today (one more on each storage node). I have not seen directions for adding disks. Do you know of any? | 13:24 |
alekibango | i mean only one device with 1 copy style | 13:24 |
*** galthaus has quit IRC | 13:24 | |
alekibango | NashTrash: add them like when you started | 13:24 |
alekibango | rebalance | 13:24 |
*** skiold has joined #openstack | 13:24 | |
alekibango | and you are in | 13:24 |
NashTrash | lele_: Yeah. I just have a MySQL client that I use to directly edit the tables. Much easier. | 13:25 |
NashTrash | alekibango: Thanks. | 13:25 |
*** santhosh has quit IRC | 13:25 | |
alekibango | NashTrash: at least thats my understanding :) | 13:25 |
*** watcher has joined #openstack | 13:26 | |
lele_ | NashTrash: i think ill go for that too. any thoughts about why my new cloned node (with the same working config) is not showing as "happy" on the controller? the api ports are reachable and connectivity is ok ... | 13:26
*** santhosh has joined #openstack | 13:26 | |
NashTrash | lele_: Time synchronization is important and not well documented. Make sure you have NTP installed on all nodes. | 13:26 |
*** lvaughn has joined #openstack | 13:27 | |
lele_ | NashTrash: thats a cool tip, documentation is a big issue on this, every paper seems to get everything working easily and "magically" :) | 13:28
alekibango | ntp is a must for every server | 13:28 |
alekibango | but for cloud, its a MUST | 13:28 |
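A minimal sketch for Ubuntu nodes, since this trips people up:

    apt-get install -y ntp
    ntpq -p    # confirm the node is actually syncing against its peers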
dobber | lele_: i'm seeing the same problem too. magic is not good for my production :( | 13:29 |
alekibango | dobber: thats why they call it cloud | 13:29 |
dobber | not to mention, outdated get started tutorial | 13:29 |
alekibango | under the fluffy cloud there is lots of magic | 13:29 |
NashTrash | lele_: Documentation is key. OpenStack has a lot of moving parts so it must be crazy trying to keep it all well documented. Especially with most users living on /trunk. | 13:29 |
*** Zangetsue has quit IRC | 13:29 | |
dobber | rsync config with some tricks is not that hard to explain and understand | 13:29 |
lele_ | yep, we're currently managing our cloud with oracleVM and enterprise manager , it has a lot of black magic, but getting openstack running the first time was a hard one | 13:29
alekibango | lele_: life is pain | 13:30 |
NashTrash | Yeah. Two plus weeks before we had a stable pilot cloud. Swift only took a day. | 13:30 |
*** zenmatt has quit IRC | 13:30 | |
lele_ | we're looking forward to building a hybrid cloud , integrating our amazon ec2 instances and our production environment over oracle vm. i got everything working with chef and the ec2 xenapi, but this issue was hard to detect, the logs not showing anything about the time sync | 13:32
dobber | through the magic of strace | 13:32 |
*** Zangetsue has joined #openstack | 13:32 | |
dobber | i found where my problem was | 13:32 |
alekibango | dobber: ? | 13:32 |
dobber | alekibango: one of my storage nodes did not have the right permissions | 13:32 |
*** santhosh has quit IRC | 13:32 | |
dobber | chown swift.swift /srv/node -R | 13:32 |
NashTrash | dobber: d'oh | 13:32 |
dobber | now swauth-prep works | 13:32 |
*** santhosh has joined #openstack | 13:33 | |
lele_ | anyone knows if the chef fog extension for XEN is actually available ? | 13:33 |
alekibango | ok, can you add user also? | 13:34 |
dobber | i added a user | 13:34 |
lele_ | oracleVM has a Xen background, if i can get this extension i hope that interacting with our prod cloud will be easy | 13:34 |
alekibango | dobber: how many nodes you have? | 13:35 |
dobber | 1 proxy, 3 storage | 13:35 |
dobber | all VMs | 13:35 |
alekibango | i am starting to think that zigo's problem is having only 1 node | 13:35 |
alekibango | (even with 1 copy configured) | 13:35 |
alekibango | is there anyone running swift with one device in one zone on one server ? | 13:36 |
dobber | i think with the right config, the setup with one node is possible | 13:36 |
alekibango | (with single copy)? | 13:36 |
*** adjohn has joined #openstack | 13:36 | |
alekibango | dobber: can you try for us please? | 13:36 |
dobber | but it's spagetti | 13:36 |
alekibango | please backup your config on server + nodes | 13:36 |
alekibango | maybe even publish it somewhere plz | 13:36 |
*** iammartian has joined #openstack | 13:36 | |
alekibango | but i think zigo's problem is not in config | 13:36 |
dobber | ok, i'll see if I have some power left tonight | 13:37 |
alekibango | it looks like something external | 13:37
*** iammartian has left #openstack | 13:37 | |
alekibango | dobber: :) | 13:37 |
alekibango | it might be some package dependency | 13:37 |
*** dendrobates is now known as dendro-afk | 13:38 | |
*** Zangetsue has quit IRC | 13:38 | |
dobber | does he have the same error as me ? | 13:39 |
dobber | what I did was | 13:39 |
dobber | edit proxy-server.conf | 13:39 |
alekibango | dobber: imho the same or similar | 13:39 |
dobber | change workers=1 | 13:39 |
dobber | restart proxy | 13:39 |
dobber | strace -o /tmp/file -s 255 -f -p PID_OF_PROXY | 13:39 |
alekibango | will try | 13:39 |
dobber | and on the other console - swauth-prep whatever | 13:39 |
alekibango | i am on his comp now | 13:39 |
alekibango | prep now works... its the adduser which has problems | 13:40 |
*** kennethkalmer has joined #openstack | 13:41 | |
dobber | also | 13:41 |
alekibango | yes good idea | 13:41 |
dobber | it will try to connect to every node | 13:41 |
dobber | and send stuff like "poll(fd=ID whaeve... | 13:42 |
*** aloga has quit IRC | 13:42 | |
dobber | recv(ID, HTTP/1.1 202 Accepted\r\nContent-Type | 13:42 |
dobber | or recv(13, "HTTP/1.1 500 Internal Server Error\r\n | 13:42 |
dobber | so you know which node is the problem | 13:42
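dobber's debugging recipe, pulled together as one hedged sketch (paths, PIDs and the auth URL are illustrative):

    # drop to a single proxy worker so there is only one process to trace
    sed -i 's/^workers *=.*/workers = 1/' /etc/swift/proxy-server.conf
    swift-init proxy restart
    strace -o /tmp/proxy.trace -s 255 -f -p $(pgrep -f swift-proxy-server | head -1)
    # in another shell, reproduce the failure:
    swauth-prep -A http://127.0.0.1:8080/auth/ -K <super_admin_key>
    # then look at the backend responses to see which node returned the error:
    grep -E 'HTTP/1\.1 (202|500)' /tmp/proxy.trace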
NashTrash | Has anyone tried using "st" from an OSX client machine? | 13:43 |
alekibango | dobber: he has only 1 node | 13:43 |
alekibango | i guess this might be source of the problem | 13:43 |
alekibango | one node, one everything | 13:43 |
dobber | alekibango: he has three nodes with one ip | 13:43
dobber | with the same ip i mean | 13:43
alekibango | imho he has only 1 if he didn't make them last night | 13:43
dobber | 1 storage ? | 13:43 |
alekibango | yes just 1 | 13:44 |
alekibango | thats why i ask -- is it possible to have only 1 ?? | 13:44 |
dobber | swift-ring-builder account.builder create 18 3 1 <- i guess this should be changed | 13:44 |
dobber | 18 1 1 ? | 13:44 |
alekibango | yes | 13:44 |
alekibango | 18 1 1 | 13:44 |
dobber | well i don't know then... | 13:44 |
dobber | so it works until swauth-add-user ? | 13:45 |
alekibango | yes | 13:45 |
alekibango | prep works | 13:45 |
dobber | so cool | 13:45 |
alekibango | will post it | 13:45 |
alekibango | wait... | 13:45 |
alekibango | i was even able to create account | 13:47 |
*** f4m8 is now known as f4m8_ | 13:48 | |
dobber | magically working now ? | 13:48
alekibango | ... now.. i still dig in strace output :) | 13:49 |
lele_ | a ton of black magic out there :P | 13:50 |
*** dprince_ has joined #openstack | 13:50 | |
dobber | i'm gonna create a magicstack.org cloud software :) | 13:50 |
alekibango | lol | 13:50 |
lele_ | dobber: jajaja LOL | 13:50 |
dobber | using inotify hooks and rsync :) | 13:51 |
*** dprince has quit IRC | 13:51 | |
*** dprince_ has quit IRC | 13:51 | |
*** zul has joined #openstack | 13:51 | |
*** dprince has joined #openstack | 13:51 | |
*** iammartian has joined #openstack | 13:51 | |
alekibango | dobber: i would use that lol | 13:51 |
*** iammartian has left #openstack | 13:51 | |
alekibango | simple, working.. lovely | 13:52 |
alekibango | and even might be accessible via filesystem hierarchy | 13:52 |
dobber | btw, it is probably a fashion thing. glusterfs has poor documentation, moosefs has poor documentation | 13:52
alekibango | glusterfs is good | 13:52 |
alekibango | but somewhat slow | 13:53 |
dobber | except when you restart a server | 13:53 |
dobber | it's much faster then moose | 13:53 |
alekibango | dobber: i for one am waiting for ceph | 13:53 |
dobber | gluster gets faster by the node | 13:53 |
dobber | moose gets slower by the node | 13:53 |
dobber | go figure... | 13:53 |
alekibango | you mean by node added? | 13:54 |
dobber | yeah, sorry | 13:54 |
alekibango | i like gluster idea | 13:54 |
alekibango | simple, clean | 13:54 |
dobber | the problem with gluster is the missing metadata | 13:55 |
alekibango | dobber: no, its a feature! | 13:55 |
*** Kronick has left #openstack | 13:55 | |
dobber | not when you get out of sync | 13:55 |
alekibango | dobber: it does sync *MAGICALLY* | 13:55 |
alekibango | lol | 13:55 |
dobber | yea, if you have 5 billion files on a node | 13:55 |
dobber | no magic can help you :) | 13:55 |
alekibango | hehe | 13:56 |
alekibango | my guts are saying ceph... but not yet | 13:56 |
dobber | will see | 13:56 |
dobber | my swift is working perfectly i think | 13:57 |
dobber | on to stress testing ... :) | 13:57 |
alekibango | swift looks great -- but not if you need filesystem | 13:57 |
lele_ | welll, i manage to start an instance , but is failing with this error : StorageRepositoryNotFound: Cannot find SR to read/write VDI | 13:57 |
lele_ | and the repo is well defined on the XCP | 13:57 |
dobber | i need a filesystem yes, but if its so good, we can change our application | 13:57 |
dobber | beer time | 13:58 |
NashTrash | dobber: We are currently evaluating gluster, ceph, and moose too. I really hope that ceph gets good quickly | 13:58 |
*** hggdh has quit IRC | 13:58 | |
*** santhosh has quit IRC | 13:59 | |
dobber | gluster is the best *working* thing i've seen lately | 14:00 |
dobber | at least for my use | 14:00 |
*** zul has quit IRC | 14:01 | |
dobber | but the consistency problem (that has been around forever) is not good | 14:01 |
NashTrash | dobber: Our apps NEED consistency. I am really hoping an open source solution works for us because the Isilon's of the world are wildly expensive. | 14:03 |
*** sunny has joined #openstack | 14:05 | |
*** msivanes has joined #openstack | 14:05 | |
*** katkee has joined #openstack | 14:06 | |
dobber | NashTrash: my opinion is that the sky is too cloudy for now ;( | 14:07 |
dobber | at least for my use | 14:07 |
dobber | but we are growing and need to change and plan. so I have to find a solution | 14:07
NashTrash | dobber: :-) We shall see. | 14:07 |
*** amccabe has joined #openstack | 14:10 | |
lele_ | guys, anyone experienced with xen integration ? | 14:10 |
alekibango | not me (kvm) | 14:10 |
lele_ | it seems like the compute is not seeing the SR storage on the XCP | 14:10 |
*** prudhvi has joined #openstack | 14:11 | |
lele_ | i know that KVM is very supported, any relevant diferences over XCP ? | 14:11 |
*** openpercept_ has joined #openstack | 14:12 | |
alekibango | lele_: i don't really know, i don't even know what SR is and what XCP is :) | 14:12
alekibango | i prefer to not dig into xen | 14:12 |
alekibango | and java | 14:12 |
lele_ | alekibango: haha, ok , don't dig into it ... the documentation of integration with xen sucks hard ... | 14:13
lele_ | alekibango: SR is storage repository / shared | 14:13 |
alekibango | :) | 14:13 |
lele_ | How do you guys manage High Availability ? | 14:13 |
alekibango | lele_: failure is part of life :) | 14:14 |
alekibango | lele_: i am trying to use sheepdog | 14:14 |
alekibango | if you are talkin about ha for disks | 14:15 |
lele_ | alekibango: :P , actually oracleVM has live-migration and node-ha features ... we really need to replicate this scheme on openstack, but the HA stuff is not well documented | 14:15
alekibango | some people do RBD | 14:15 |
alekibango | lele_: nasa is not using ha for disks | 14:15 |
alekibango | cloud ready applications do not need reliability | 14:15 |
alekibango | :) | 14:15 |
alekibango | lele_: look on rbd or sheepdog | 14:16 |
alekibango | but i dont know how much luck will you have with XEN | 14:16 |
lele_ | alekibango: i know, we need that: if a physical node crashes, all the VMs migrate to an active node of the cluster ... but it seems that openstack is pretty far away from this | 14:16
alekibango | lele_: openstack supports this | 14:16 |
alekibango | if you use aftercactus trunk | 14:16 |
alekibango | but its not yet used in production | 14:17 |
alekibango | as nasa doesnt need this | 14:17 |
lele_ | alekibango: wow, and it gets done automagically too ? | 14:17
alekibango | lele_: i don't think so, for now | 14:17
alekibango | but it will this year, i am sure | 14:17 |
alekibango | its too much magic to get running reliably without hard work | 14:18 |
alekibango | and local drives are faster | 14:18 |
lele_ | alekibango: thats cool , ill install a KVM test node today to see the differences and compatibility | 14:18
alekibango | lele_: good luck :) | 14:18 |
alekibango | tell me how it worked | 14:18 |
alekibango | (or didnt, lol) | 14:19 |
lele_ | alekibango: lol , we got nfs netapp mounts with SAS disks, i think that should work fast, i hope that KVM doesn't wear my patience out as XEN did ... | 14:20
*** Zangetsue has joined #openstack | 14:21 | |
alekibango | :) | 14:23 |
*** jkoelker has joined #openstack | 14:23 | |
*** zenmatt has joined #openstack | 14:28 | |
NashTrash | notmyname: It appears I have my whole Swift cluster (with HA) up and running. I want to have S3 compatibility. I can not find any documentation on how to add this in. Any pointers? | 14:33 |
*** adjohn has quit IRC | 14:33 | |
notmyname | NashTrash: the swift3 middleware | 14:33 |
NashTrash | notmyname: Right, I saw that it exists, but nothing on how to set it up. | 14:34 |
notmyname | NashTrash: however, be aware that the S3 compatibility will always lag and be a 2nd class citizen | 14:34 |
NashTrash | notmyname: Sure. I understand. | 14:34 |
notmyname | hmm...I don't see a sample config | 14:36 |
notmyname | creiht: ^? | 14:37 |
*** kakella has joined #openstack | 14:37 | |
*** kakella has left #openstack | 14:37 | |
*** dendro-afk is now known as dendrobates | 14:39 | |
Eyk | https://answers.launchpad.net/swift/+question/154332 <-- at the end I wrote how to get s3 running | 14:40 |
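Since there is no sample config in the docs yet, a hedged sketch of the proxy-server.conf pieces for the in-tree swift3 middleware of this release (the pipeline order shown is the commonly suggested one, not verified here):

    [pipeline:main]
    pipeline = healthcheck cache swift3 swauth proxy-server

    [filter:swift3]
    use = egg:swift#swift3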
xavicampa | hi! anyone knows how to boot an image (snapshot) created with "nova image-create"? I'm using glance, "nova image-list" does not list it, "glance index" in the glance node neither, only "glance show 38" shows it | 14:41 |
notmyname | Eyk: thanks. I just wasn't sure what was required in the filter section | 14:41 |
*** mgoldmann has quit IRC | 14:42 | |
*** derrick_ has quit IRC | 14:43 | |
Eyk | it took me some time to get it working, all I could find on the web was wrong ;-) | 14:43 |
NashTrash | Eyk: Thanks. I will take a look at it. | 14:44 |
NashTrash | notmyname: Thanks too. | 14:44 |
creiht | NashTrash: yeah that should get you going | 14:45 |
creiht | btw, the compatibility layer is still a bit experimental | 14:45 |
NashTrash | creiht: Excellent. Thanks all. | 14:45 |
uvirtbot | New bug: #781716 in nova "Post install check assumes dpkg-vendor" [Undecided,New] https://launchpad.net/bugs/781716 | 14:46 |
*** dendrobates is now known as dendro-afk | 14:48 | |
annegentle | Eyk: thanks for writing up the S3 config, I'll take a look and try to roll it into the docs | 14:48 |
mtaylor | morning jaypipes | 14:50 |
creiht | mtaylor: !!!!! | 14:50 |
creiht | mtaylor: the swift ppas are still a 1.2 | 14:50 |
mtaylor | creiht: I thought soren was supposed to be doing your packages ... | 14:51 |
creiht | makes me a sad panda :( | 14:51 |
* creiht has no idea | 14:51 | |
mtaylor | creiht: this may again be one of those times when each of us thought the other was taking care of it | 14:51 |
mtaylor | creiht: he's a few rooms over right now - I'll find him and sort it out | 14:51 |
creiht | mtaylor: cool, and thanks | 14:51 |
*** dragondm has joined #openstack | 14:53 | |
*** thatsdone has joined #openstack | 14:53 | |
deshantm | lele_: what were you trying to do with XCP that you couldn't do? | 14:55 |
lele_ | deshantm: hi, it seems that the nova-compute running on the XCP node is not seeing the repo | 14:57 |
lele_ | so the VM launches, but doesn't start | 14:57
lele_ | keeps in shutdown mode | 14:57 |
lele_ | StorageRepositoryNotFound: Cannot find SR to read/write VDI. | 14:58 |
*** thatsdone has quit IRC | 14:58 | |
lele_ | but the SR is there ... | 14:58 |
deshantm | lele_: what versions of everything are you running? did you follow the openstack wiki XenServer howto? | 14:58 |
lele_ | yep i followed it | 14:59 |
lele_ | cactus version of nova | 14:59 |
lele_ | XCP last version | 14:59 |
deshantm | lele_: there was a thread on the openstack-operators list about this | 14:59 |
deshantm | was that you? | 14:59 |
lele_ | yep, that was me :) | 14:59 |
lele_ | i attached the stack trace | 14:59 |
*** thatsdone has joined #openstack | 15:00 | |
deshantm | lele_: I gotta run to a meeting, but I can try to get you in contact with the right people | 15:00 |
lele_ | thats cool ! | 15:00 |
deshantm | lele_: perhaps mcclurmc has some insights | 15:00
lele_ | great, could you reply to me email with the contact ? | 15:00 |
deshantm | lele_: I was the one that responded a bit to that thread | 15:00 |
deshantm | :) | 15:00 |
lele_ | :) yep, but no one else replied after the trace :( | 15:01
deshantm | lele_: to me that looked like a nova issue, I don't have the experience with that part myself | 15:01 |
deshantm | I'll try to pull in others who might have hints | 15:01 |
deshantm | lele_: late for a meeting now though, talk to you later | 15:02 |
lele_ | thanks thats a big help ! | 15:02 |
lele_ | have a great meeting | 15:02 |
*** thatsdone has quit IRC | 15:06 | |
*** thatsdone has joined #openstack | 15:06 | |
*** zenmatt has quit IRC | 15:06 | |
*** xavicampa has quit IRC | 15:07 | |
*** mancdaz has joined #openstack | 15:08 | |
gholt | mtaylor: I'm not sure why the planet openstack feed always shows you as the author of my posts. The planet page itself shows fine. Hopefully I'm not too embarrassing to you. ;P | 15:11
mtaylor | gholt: hrm. that's weird | 15:12 |
*** RoAkSoAx has joined #openstack | 15:12 | |
*** citral has quit IRC | 15:14 | |
gholt | nelson: One of the guys just noticed that https://twitter.com/#!/russnelson/status/68050129105592320 401s atm | 15:14 |
*** msivanes has quit IRC | 15:18 | |
creiht | NashTrash: http://swift.openstack.org/misc.html#module-swift.common.middleware.swift3 | 15:19 |
creiht | has an example boto config | 15:19 |
creiht | erm connection | 15:19 |
*** zenmatt has joined #openstack | 15:19 | |
*** Shentonfreude has joined #openstack | 15:19 | |
*** thatsdone has quit IRC | 15:23 | |
*** guigui has quit IRC | 15:24 | |
dobber | so, is there a way to "mount" an object in swift? | 15:25 |
*** enigma has joined #openstack | 15:25 | |
notmyname | dobber: not as part of swift itself, but there are some 3rd party tools that do something like that | 15:26 |
dobber | cool | 15:26 |
notmyname | cloudfuse is a fuse module that can talk to a swift cluster | 15:27 |
notmyname | but realize that mounting a swift cluster is generally a bad idea. it's not designed to be used as a block-level device | 15:28 |
radek | all my vm's have 100M network although server nic is 1G | 15:28 |
dobber | yeah i know | 15:28 |
dobber | but i'm still looking for ways to use it | 15:28 |
radek | its a single server installation with default vlan mode | 15:28 |
notmyname | ok :-) | 15:28 |
radek | is it normal ? | 15:28 |
radek | i can't find out whats wrong with it | 15:30 |
*** deepy has quit IRC | 15:31 | |
NashTrash | creiht: Nice. Thanks. | 15:32 |
*** deepy has joined #openstack | 15:32 | |
radek | any one seen this issue ? | 15:33 |
*** jwilmes has joined #openstack | 15:35 | |
*** joearnold has joined #openstack | 15:36 | |
*** maplebed has joined #openstack | 15:37 | |
nelson | gholt: yeah, it's in an ... uncertain state. | 15:40 |
*** rnirmal has joined #openstack | 15:43 | |
*** rchavik has quit IRC | 15:43 | |
gholt | Heh :) | 15:43 |
*** arun_ has quit IRC | 15:44 | |
*** dobber has quit IRC | 15:45 | |
nhm | any of you guys doing VMs on magnycours? | 15:45 |
*** magglass1 has quit IRC | 15:46 | |
*** msivanes has joined #openstack | 15:48 | |
jaypipes | mtaylor: morning :) | 15:49 |
uvirtbot | New bug: #781756 in nova "AuthToken server_management_url spelled incorrectly" [Undecided,New] https://launchpad.net/bugs/781756 | 15:51 |
*** daveiw has quit IRC | 15:53 | |
katkee | hello, is there a document describing ebtables iptables nat bridges and the networking modes in openstack? | 15:55 |
*** scyld has joined #openstack | 15:56 | |
*** skiold has quit IRC | 15:58 | |
*** scyld is now known as skiold | 15:58 | |
NashTrash | creiht: Do you have a moment for two quick Swift questions? | 15:59 |
*** arun_ has joined #openstack | 15:59 | |
*** arun_ has joined #openstack | 15:59 | |
NashTrash | Anyone open for a Swift question? | 16:01 |
notmyname | don't ask to ask. just ask | 16:02 |
joearnold | Depends on the question. | 16:03 |
*** Zangetsue has quit IRC | 16:03 | |
NashTrash | Ok. I followed the directions for creating the first Swift user (root). I then added a second proxy server and followed the directions to change root's URL. Now the stat command fails with "Account not found" | 16:04 |
NashTrash | I can still curl and get responses | 16:04 |
*** zenmatt has quit IRC | 16:06 | |
NashTrash | My other question is when I try swauth-add-account I get the following error: "Account creation failed: 501 Not Implemented". Is account creation really not implemented? | 16:06 |
notmyname | the add account error is probably a config issue | 16:06 |
notmyname | do you have allow_account_management set in the proxy server? | 16:07 |
Eyk | maybe you set the wrong root url,first question | 16:07 |
*** deepy has quit IRC | 16:07 | |
*** deepy has joined #openstack | 16:07 | |
NashTrash | notmyname: Yes, allow_account_management = true | 16:07 |
*** mattray has joined #openstack | 16:09 | |
NashTrash | Eyk: If I run swauth-list on the system account I see the following: {"services": {"storage": {"default": "local", "local": "https://99.99.99.94:8080/auth/"}}, "account_id": "AUTH_59d7770d-41d9-4e2d-9cd6-e4a877ebcb1d", "users": [{"name": "root"}]} | 16:09 |
vishy | katkee: script should work fine on bare metal. You will have to change libvirt_type to kvm | 16:10 |
*** zigo-_- has quit IRC | 16:10 | |
*** MarkAtwood has joined #openstack | 16:11 | |
NashTrash | notmyname: Here is my proxy config - http://paste.openstack.org/show/1333/ | 16:11 |
*** johnpur has joined #openstack | 16:13 | |
*** ChanServ sets mode: +v johnpur | 16:13 | |
Eyk | NashTrash, a mistake I made too: the new url you set needs the account key at the end | 16:13
gholt | NashTrash: I'm grepping through the code and I see nowhere a 501 can occur. You sure it was a 501? | 16:14 |
NashTrash | Eyk: Ah, you mean the AUTH_59... should be after /auth/? | 16:16 |
Eyk | NashTrash, "local": "https://99.99.99.94:8080/v1/AUTH_59d7770d-41d9-4e2d-9cd6-e4a877ebcb1d" this should the right output | 16:16 |
NashTrash | Eyk: Ok, one second... | 16:16 |
*** dirkx_ has quit IRC | 16:16 | |
NashTrash | Eyk: Hm...I think I screwed that up. Please take a look - http://paste.openstack.org/show/1334/ | 16:22 |
*** enigma has quit IRC | 16:23 | |
NashTrash | Eyk: Ah! Missing v1 after auth | 16:23 |
Eyk | compare my string with the one you used | 16:24 |
Eyk | you get it ;-) | 16:24 |
*** enigma has joined #openstack | 16:25 | |
Eyk | many things are not clear in the documentation; these were the only obstacles setting up swift ;-) | 16:25
NashTrash | Eyk: And stat is now working. Thank you. | 16:29 |
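The fix Eyk describes can also be applied with the swauth-set-account-service helper that ships with swauth; a hedged sketch reusing the account id from the paste above and assuming the account is named system as in the howto:

    swauth-set-account-service -A https://99.99.99.94:8080/auth/ -K <super_admin_key> \
        system storage local https://99.99.99.94:8080/v1/AUTH_59d7770d-41d9-4e2d-9cd6-e4a877ebcb1d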
NashTrash | gholt: Were you talking about my inability to create accounts? | 16:29 |
gholt | NashTrash: I think so, but just generally speaking I can't find where a 501 could happen from Swift. | 16:30 |
katkee | vishy: we have 2 bare metal servers running openstack installed with your script... we are now fighting with network config to understand ebtables, iptables, vlanmanager etc | 16:30 |
radek | anyone have any ideas why my vm's have only 100Mb nic speed? I'm using default vlan mode networking | 16:31
*** grapex has joined #openstack | 16:31 | |
NashTrash | gholt: Well, I got it. Here is the command- swauth-add-account -A https://99.99.99.94:8080/auth/ -K XXXXXXXX testaccount | 16:31 |
NashTrash | gholt: And the result was "Account creation failed: 501 Not Implemented" | 16:31 |
*** nacx has quit IRC | 16:32 | |
*** photron_ has joined #openstack | 16:32 | |
Eyk | is this the right full syntax for this command? | 16:33 |
vishy | radek: you can get up to about 600MB if you use virtio | 16:34 |
radek | any docs how to do it ? | 16:34 |
radek | do I have to configure nova.conf or is it on the server side? | 16:35 |
gholt | NashTrash: Do you have something else running on 99.99.99.94 port 8080? A load balancer, nginx or something? I deliberately broke my install and got a 500 instead of 501... | 16:35 |
NashTrash | gholt: Yes. 99.99.99.94:8080 is my load balancer. But other commands (swauth-list for example) seem to work fine. | 16:36 |
gholt | Ah, okay. Well I'm not sure, but if you have multiple proxies it could be one of them is running with account management off. You might've changed the proxy conf and forgot to reload or something [just guessing] | 16:37 |
vishy | radek: you can edit libvirt.xml.template and take out the comments around the virtio line | 16:38 |
vishy | radek: you just have to make sure that the guests you use support virtio | 16:38 |
NashTrash | gholt: Just checked and both are set to True. | 16:38 |
radek | I've tried that; do I need to reboot the instance after? | 16:38 |
NashTrash | I will be afk for lunch. Thanks for all the help. | 16:38 |
radek | sorry, after I've changed that, do I restart the instance? | 16:39 |
*** zenmatt has joined #openstack | 16:39 | |
gholt | NashTrash: Np, if you get the chance, just restart both proxy services and see if that helps. Just in case... | 16:39 |
gholt | NashTrash: If not, check the logs on each proxy and see if they give any hints to what's wrong. | 16:39 |
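(For anyone following along, the proxy-side settings under discussion look roughly like this in proxy-server.conf on every proxy; the key is illustrative and a real pipeline usually carries extra middleware.)

    [app:proxy-server]
    use = egg:swift#proxy
    allow_account_management = true

    [filter:swauth]
    use = egg:swift#swauth
    super_admin_key = swauthkey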
raggi_ | is this channel moderated and i'm not noticing? | 16:41 |
*** markvoelker has quit IRC | 16:41 | |
vishy | radek: you need to recreate instances | 16:42 |
vishy | that xml template needs to be changed on all of the compute nodes | 16:43 |
radek | recreate ? | 16:43 |
vishy | yes destroy the vm | 16:43 |
vishy | make a new one | 16:43 |
radek | xml template is held with image ? | 16:44 |
*** enigma has quit IRC | 16:44 | |
vishy | yup | 16:44 |
raggi_ | is there a way to rebuild the `security_group_instance_association` table? | 16:44 |
*** Eyk has quit IRC | 16:44 | |
vishy | you can manually change it | 16:44 |
vishy | actually euca-reboot-instances would probably work after changing the template | 16:44 |
*** markvoelker has joined #openstack | 16:44 | |
radek | where is xml template stored ? | 16:45 |
vishy | raggi_: automatically? I don't think so | 16:45 |
vishy | radek: depends on how you installed | 16:45 |
raggi_ | vishy: i have a couple of old instances, i'd like to apply the security group changes to | 16:45 |
radek | default ubuntu install | 16:45 |
*** dirkx_ has joined #openstack | 16:45 | |
*** jkoelker has quit IRC | 16:45 | |
radek | from package manager | 16:45 |
raggi_ | vishy: i altered the iptables rules, which worked fine until a new (different) instance was booted, and then it regenerated all of the nova chains | 16:45 |
raggi_ | unfortunately, it regenerated the chains using stale data | 16:46 |
radek | where is template xml by default ? | 16:46 |
vishy | raggi_: you manually altered the rules? | 16:46 |
raggi_ | is there a way to re-apply a security group to an instance? | 16:46 |
vishy | radek: looking | 16:46 |
radek | ok thx | 16:46 |
raggi_ | vishy: yes, but that's been corrected | 16:46 |
raggi_ | vishy: (iptables was flushed and rewritten by the instance boot) | 16:46 |
vishy | radek: might be faster to just do a locate | 16:47 |
radek | whats the name of the template | 16:47 |
radek | ? | 16:47 |
vishy | libvirt.xml.template | 16:48 |
radek | thx | 16:48 |
vishy | should be in nova/virt dir inside of python | 16:48 |
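(As a rough illustration of the edit vishy describes: the interface stanza in libvirt.xml.template carries the virtio model line inside a comment, and enabling it means uncommenting that line. The element and variable names below are from memory and may differ between releases.)

    <interface type='bridge'>
        <source bridge='${bridge_name}'/>
        <mac address='${mac_address}'/>
        <!-- uncomment the following line to enable virtio networking -->
        <model type='virtio'/>
    </interface>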
*** katkee has quit IRC | 16:48 | |
vishy | raggi_: if you change a rule in a group it will update the rules on any instance that is using it | 16:49 |
raggi_ | vishy: that doesn't seem to be working | 16:49 |
raggi_ | vishy: we only have one security group | 16:49 |
vishy | raggi_: really? | 16:49 |
vishy | raggi_: you might need to restart nova-compute first | 16:49 |
*** kbringard has joined #openstack | 16:49 | |
vishy | and run a new instance on the host | 16:50 |
vishy | so that it creates the basic rules? | 16:50 |
raggi_ | actually, there's an odd rule in there: http://pastie.textmate.org/private/bcuinymssbtubwlmbelllw | 16:50 |
raggi_ | could that grpname rule be causing a problem? | 16:50 |
kbringard | hey guys! quick question about zones? | 16:50 |
kbringard | do I need to set more than --node_availability_zone in my nova.conf to change the name of the zone? | 16:50 |
kbringard | and/or is there any documentation about zone setup? I found some random stuff in the developer docs, but nothing super useful :-/ | 16:51 |
*** jkoelker has joined #openstack | 16:51 | |
raggi_ | vishy: right on the money, restarting nova-compute did it, thanks very much | 16:51 |
raggi_ | didn't occur to me to try that :) | 16:52 |
*** koolhead17 has joined #openstack | 16:52 | |
*** jkoelker has quit IRC | 16:54 | |
*** jkoelker has joined #openstack | 16:57 | |
*** mattray has quit IRC | 16:59 | |
*** obino has joined #openstack | 17:00 | |
vishy | raggi_: cool | 17:00 |
vishy | kbringard: availability_zones or distributed_zones? | 17:01 |
kbringard | uhm... that is a good question | 17:01 |
kbringard | availability_zone I would assume... is there a doc explaining the difference? (sorry, new to zones in OpenStack) | 17:01 |
vishy | kbringard: i think just the conf file change and you might have to use the zone scheduler to get any usefulness out of them | 17:02 |
kbringard | essentially, I'm looking to start implementing what's outlined in the MultiClusterZones section of the wiki | 17:02 |
kbringard | http://wiki.openstack.org/MultiClusterZones | 17:02 |
vishy | kbringard: oh no that is actually distributed zones | 17:03 |
vishy | which work although the scheduler is still being finished | 17:03 |
*** mattray has joined #openstack | 17:05 | |
vishy | kbringard: not sure if there is a good how to for how to use those zones. dabo or sandywalsh might have some insight | 17:06 |
kbringard | cool, thanks, I'll keep messing with it | 17:06 |
kbringard | I don't mind figuring it out, but if there are docs that help, I'm all for using them :-D | 17:06 |
kbringard | if I come up with anything meaningful I'll start a wiki page about it | 17:07 |
sandywalsh | vishy, kbringard I added docs on Zones to the Cactus release | 17:07 |
kbringard | sandywalsh: in the admin guide, or developer guide? | 17:08 |
anticw | notmyname: inode64 won't matter because you have large inodes | 17:08 |
anticw | 1k inodes means you can cover i think 4 or 8tb drives evenly in <= 32 bits | 17:09 |
sandywalsh | kbringard, little of both ... trying to find a link, sec | 17:09 |
sandywalsh | annegentle, where would I find a link to the zones docs I added to Cactus? | 17:10 |
notmyname | anticw: we've seen older drives in the cluster slow down, even though they have the same amount of data on them as newer drives. we came across the inode64 thing last night when I ran out of inodes on my SAIO and wondered if it may help with the drive slowness | 17:10 |
*** enigma has joined #openstack | 17:10 | |
anticw | ran out of inodes? | 17:11 |
anticw | they are dynamically allocated | 17:11 |
anticw | you probably hit imaxpct=25 | 17:11 |
anticw | which is the default | 17:11 |
notmyname | err..actually, that's a bit of a guess. I got "out of room" errors | 17:11 |
anticw | and if you did that's a bit horrific | 17:11 |
notmyname | and the mount had plenty of space left on it | 17:11 |
anticw | 2TB drive? 1k inodes? | 17:11 |
notmyname | no, this was on my all-in-one. running on a slicehost VM | 17:12 |
*** kashyap has quit IRC | 17:12 | |
notmyname | loopback device | 17:12 |
anticw | ok so i doubt you did | 17:12 |
anticw | i think you hit imaxpct then | 17:12 |
notmyname | ok. like I said. just a guess | 17:12 |
anticw | so you can tweak that up | 17:12 |
notmyname | what i imaxpct? | 17:12 |
notmyname | s/i/is | 17:12 |
anticw | how much space inodes can use | 17:12 |
kbringard | sandywalsh: I did find http://wiki.openstack.org/MultiClusterZones | 17:13 |
anticw | you sell disk blocks, not inodes ... so if you bump into that i would be worried | 17:13 |
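(For reference, a quick way to check and, if needed, raise that limit on an XFS filesystem; the mount point is illustrative.)

    xfs_info /srv/node/sda3 | grep imaxpct   # shows the current imaxpct value
    df -i /srv/node/sda3                     # shows inode usage
    xfs_growfs -m 50 /srv/node/sda3          # raises the inode space limit to 50% while mounted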
kbringard | but that's more of a blueprint and less documentation | 17:13 |
notmyname | we haven't seen it in production. just my really old saio | 17:13 |
sandywalsh | kbringard, that's the bp spec, but I documented the implementation a while ago ... still looking | 17:13 |
kbringard | no worries, thanks for the help | 17:14 |
anticw | notmyname: lots of deletes? | 17:14 |
anticw | tombstones burn inodes ... | 17:14 |
anticw | gah | 17:14 |
anticw | latency horrible here, typing sucks | 17:14 |
sandywalsh | kbringard, this is the file, but I don't know why it's not in the published docs: http://bazaar.launchpad.net/~hudson-openstack/nova/trunk/view/head:/doc/source/devref/zone.rst | 17:14 |
notmyname | ya, lots of everything. the saio was probably 6 months old, so 6 months of testing, etc | 17:14 |
*** jdurgin has joined #openstack | 17:14 | |
*** dirkx_ has quit IRC | 17:15 | |
anticw | df -hi | 17:15 |
kbringard | sandywalsh: awesome, thanks! I'll read over it | 17:15 |
anticw | and see how many you have | 17:15 |
anticw | 1k each ... and your block device is some size too ... check the math | 17:15 |
sandywalsh | kbringard, it's more admin related, my summit zones presentation is more dev focused. Let me know if you need the link | 17:15 |
notmyname | ya, that didn't say I was out of room, but docs I found said that df -i lies (or doesn't tell the whole truth) | 17:15 |
notmyname | I rebuilt it this morning, so I can't go back and check now | 17:15 |
kbringard | sandywalsh: thanks, not yet... I'm just looking to implement the building blocks at this point. We have a single zone for now but will likely be looking to implement more (in the huddle fashion you spoke of in the blueprint) | 17:16 |
sandywalsh | cool | 17:16 |
sandywalsh | let me know if you have any questions | 17:17 |
*** watcher has quit IRC | 17:17 | |
*** koolhead17 has quit IRC | 17:17 | |
*** koolhead17 has joined #openstack | 17:17 | |
anticw | df -i sorta lies | 17:18 |
anticw | but it's close enough | 17:18 |
anticw | cw@naught:~$ sudo find / -xdev | wc -l | 17:19 |
anticw | 154766 | 17:19 |
anticw | /dev/sda5 xfs 4976768 154290 4822478 4% / | 17:19 |
anticw | 154766 ~ 154290 | 17:19 |
anticw | so lies, i dunno, that's a bit strong | 17:19 |
*** mgoldmann has joined #openstack | 17:22 | |
*** mszilagyi has joined #openstack | 17:23 | |
kbringard | sandywalsh: so is the top level zone always called nova? | 17:23 |
kbringard | basically: currently I have one zone, I set --zone-name= in the nova.conf and restarted everything on the controller | 17:24 |
sandywalsh | kbringard, all zones are called 'nova' by default. --zone_name=foo to change it | 17:24 |
kbringard | then I did a nova zone-add on said controller | 17:25 |
kbringard | but it's still showing the zone as nova | 17:25 |
sandywalsh | do you have --zone_name or --zone-name? | 17:25 |
kbringard | and zone-info is only showing the basic stuff... same as before I had any child zones | 17:25 |
*** Kronick has joined #openstack | 17:25 | |
kbringard | ohhhhh, haha | 17:25 |
kbringard | <--- dumb | 17:25 |
sandywalsh | :) common mistake | 17:25 |
sandywalsh | and it takes about 30 seconds for the updates to come through too | 17:26 |
kbringard | yea, I was noticing | 17:26 |
kbringard | that's fine though | 17:26 |
kbringard | so then I just bring up API servers wherever I want a zone, name them accordingly and zone-add them to the parent | 17:27 |
kbringard | right? | 17:27 |
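(A minimal sketch of the per-zone settings mentioned above, assuming the cactus-era flag spellings; the names are illustrative and the zone-add arguments are whatever the novaclient help lists.)

    # nova.conf on a child zone's API server
    --zone_name=zone-east
    --node_availability_zone=location0

    # then register the child with the parent
    nova zone-add ...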
*** zenmatt has quit IRC | 17:28 | |
kbringard | sorry, hopefully one last thing... do I set the zone properties manually in the DB? | 17:29 |
kbringard | oh, and what flag is it to change the availability zone name? | 17:29 |
*** mgoldmann has quit IRC | 17:31 | |
*** mgoldmann has joined #openstack | 17:31 | |
kbringard | or does it just set it based on the node_availability_zone of the node when it registers? | 17:32 |
*** zenmatt has joined #openstack | 17:33 | |
*** dirkx_ has joined #openstack | 17:33 | |
notmyname | anticw: thanks. good info to know | 17:36 |
*** pguth66 has joined #openstack | 17:38 | |
*** xavicampa has joined #openstack | 17:38 | |
*** Eyk has joined #openstack | 17:40 | |
NashTrash | gholt: I am back and figured out the 501 Not Implemented error. Turns out the Pound load balancer defaults to only allowing GET, POST, and HEAD. swauth-add-account uses PUT. | 17:44 |
gholt | NashTrash: Ah, that's no fun. But I guess not a real big deal if you can just do account management directly shelled into one of the proxies to 127.0.0.1. | 17:45 |
notmyname | of course, that's going to make object creation hard | 17:46 |
NashTrash | gholt: I was able to reconfigure Pound to allow POST. All is good. | 17:46 |
gholt | Hah, I missed the kinda obvious there, yeah... :) | 17:46 |
NashTrash | gholt: Showed up buried in the Pound log | 17:47 |
gholt | You'll need PUT as well. COPY is "nice" but not 100% /required/. | 17:47 |
NashTrash | gholt: All set I think. Now time for some more testing | 17:47 |
gholt | Does pound spool PUTs? If so, that'd be bad. | 17:47 |
*** markvoelker has quit IRC | 17:48 | |
*** dprince has quit IRC | 17:48 | |
NashTrash | gholt: I do not think so. | 17:49 |
notmyname | I think Pound is ok. It was our 2nd choice for LB (1st choice for open source LBs) | 17:49 |
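(For anyone hitting the same 501: Pound's method filtering is controlled by the xHTTP directive inside the ListenHTTP/ListenHTTPS section, roughly as below; check the Pound docs for the exact levels in your version.)

    ListenHTTPS
        ...
        # 0 = GET/POST/HEAD only (the default); 1 adds PUT/DELETE;
        # higher levels add WebDAV verbs such as COPY
        xHTTP 1
    End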
gholt | anticw: If you ever want me to run some xfs stuff on a "was in" production drive, let me know. I'll have the ops guys pull a drive and we can mess with it to see what's going on. | 17:49 |
NashTrash | Can one swift user be in multiple accounts? | 17:49 |
notmyname | NashTrash: depends on your auth, but not with swauth | 17:49 |
NashTrash | notmyname: Not a big issue. Just curious. | 17:50 |
gholt | NashTrash: No, a user is bound to an account for swauth. You'd have to make the user in each account. :/ Or you could give that user access to the other accounts (better). | 17:50 |
notmyname | my point is that is an auth "Feature" not a swift-specific thing | 17:50 |
NashTrash | gholt: Hm..how do you give a user access to another account? | 17:51 |
*** mahadev has joined #openstack | 17:51 | |
*** mahadev has left #openstack | 17:51 | |
NashTrash | notmyname: Makes sense | 17:51 |
*** dirkx_ has quit IRC | 17:51 | |
gholt | NashTrash: st post -r 'account:user' -w 'account:user' container | 17:52 |
NashTrash | gholt: Thanks. I will have to play around with that. | 17:53 |
gholt | NashTrash: For a little more info: http://swift.openstack.org/misc.html#acls | 17:53 |
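(A concrete sketch of that against a swauth endpoint; the URL, accounts, user names, and container are illustrative.)

    # let account2:user2 read and write the 'photos' container owned by system:root
    st -A https://99.99.99.94:8080/auth/v1.0 -U system:root -K <password> \
        post -r 'account2:user2' -w 'account2:user2' photos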
nelson | wow. Should my /srv/node/sda3/objects subdirectory have 120K files in it? That seems excessive. | 17:54 |
*** dprince has joined #openstack | 17:54 | |
gholt | Heh, I guess that depends on how many files you have stored. | 17:55 |
nelson | 300k files, but we plan to store 256 times as many. | 17:55 |
*** skiold has quit IRC | 17:56 | |
anticw | 120k isn't that many | 17:56 |
NashTrash | gholt: I created a new user in the system account. But I get a 403 Forbidden when I try to use that user with swauth-list on system. | 17:56 |
gholt | nelson: And how many nodes? 300k objects * 3 replicas / devices | 17:56 |
NashTrash | gholt: The .super_admin works fine | 17:56 |
nelson | six devices | 17:57 |
gholt | NashTrash: Only the super admin and reseller admins can use swauth-list iirc | 17:57 |
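(If a non-super-admin needs to run listings, a reseller-admin user can be created with something along these lines; the names and key are illustrative, and -r is what marks the user as a reseller admin, versus -a for a plain account admin.)

    swauth-add-user -A https://99.99.99.94:8080/auth/ -K <super_admin_key> \
        -r system newadmin <password>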
anticw | gholt: that would be good, ill take you up on that but probably not for a few days as there are some fires i have to put out | 17:57 |
gholt | nelson: Sounds about right then. :) | 17:57 |
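(Spelled out: 300,000 objects x 3 replicas / 6 devices = 150,000 files per device, the same order of magnitude as the ~120K nelson is seeing.)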
*** dirkx_ has joined #openstack | 17:57 | |
*** skiold has joined #openstack | 17:57 | |
NashTrash | gholt: Ok. | 17:57 |
nelson | as long as xfs can handle that many, yeah. | 17:58 |
*** kashyap has joined #openstack | 17:58 | |
gholt | anticw: Cool, I have a feeling it'll take a while to get ahold of a drive anyway. Have to pull it, ship from Dallas, convince people I'm not doing something evil, etc. :) | 17:58 |
NashTrash | Thanks everyone. Off to meetings! | 17:59 |
*** NashTrash has quit IRC | 17:59 | |
*** katkee has joined #openstack | 17:59 | |
Eyk | is there any filecountlimit in swift? | 18:01 |
Eyk | objects in a container or so | 18:02 |
gholt | Eyk: It depends on your container server performance. But we've been recommending staying under 10 million objects per container. There's no hard-coded limit. | 18:03 |
notmyname | run your container servers on SSD drives and then you can have 1 billion+ objects with no problem | 18:04 |
gholt | Tested and true. :) ^^ | 18:04 |
*** clauden_ has joined #openstack | 18:05 | |
*** fabiand__ has joined #openstack | 18:06 | |
*** BK_man has joined #openstack | 18:12 | |
devcamca- | can someone remind me how to add someone to the openstack mailing list? been a looong time since i had to deal with that | 18:12 |
*** devcamca- is now known as devcamcar | 18:12 | |
*** dirkx_ has quit IRC | 18:12 | |
gholt | devcamcar: http://wiki.openstack.org/MailingLists :) | 18:13 |
devcamcar | gholt: too easy! :) | 18:13 |
*** dirkx_ has joined #openstack | 18:14 | |
annegentle | sandywalsh kbringard the zones doc Sandy did is here: http://nova.openstack.org/devref/zone.html, no doc in docs.openstack.org yet. | 18:19 |
kbringard | thanks annegentle~ | 18:19 |
kbringard | ! | 18:19 |
*** infinite-scale has joined #openstack | 18:20 | |
annegentle | sandywalsh: I'll add zone.rst to the devref page that pulls it all into the nav, etc. | 18:20 |
nelson | gholt: I have an auth question. Don't shoot me. It looks like the auth server is returning 200 OK, but php-cloudfiles is expecting a 204. You familiar with that? | 18:20 |
Eyk | will resource consumption increase a lot with more files, or is this not a problem? Will 1 billion objects consume roughly 1000 times more resources than 1 million, or less? | 18:21 |
btorch | nelson: I think it's because on 1.3 (swauth) it returns 204s but on the old one 1.2 (devauth) it returned 200s | 18:21 |
*** galthaus has joined #openstack | 18:21 | |
btorch | nelson: sorry the other way around | 18:22 |
nelson | btorch: phwew! | 18:22 |
*** galthaus has joined #openstack | 18:22 | |
nelson | that's not a problem, I'll just fix php-cloudfiles. | 18:23 |
btorch | nelson: 1.3 (swauth) it returns 200s during auth | 18:23 |
nelson | convincing php-cloudfiles to accept 200 works. now I'm down into problems in my own code. :) | 18:25 |
uvirtbot | New bug: #781837 in swift "IPv6 compressed notation and replication" [Undecided,New] https://launchpad.net/bugs/781837 | 18:26 |
*** cp16net has joined #openstack | 18:26 | |
*** MarkAtwood has quit IRC | 18:27 | |
btorch | nelson: was that on the latest php api ? | 18:28 |
nelson | kinda | 18:29 |
nelson | checked out of the git repository March 31st. So a month and a half old. | 18:30 |
nelson | btorch: are you familiar with that code? | 18:31 |
*** mrmartin has joined #openstack | 18:31 | |
mrmartin | re | 18:31 |
btorch | nelson: no I believe conrad takes care of that I think | 18:32 |
*** dirkx_ has quit IRC | 18:32 | |
nelson | Haven't seen conrad around ever. | 18:32 |
nelson | neither here nor on #cloudfiles | 18:33 |
*** tblamer has joined #openstack | 18:33 | |
kbringard | I have an interesting... thing | 18:33 |
kbringard | euca-describe-availability-zones shows the zone I set | 18:33 |
kbringard | but verbose shows the zone as nova still | 18:34 |
j05h | sounds buggish | 18:34 |
btorch | nelson: try chmouel | 18:34 |
nelson | btorch: I think you just did. :) | 18:35 |
btorch | nelson: I see that chmouel and conrad are the ones updating the github .. if I see conrad today I'll let him know too | 18:35 |
btorch | nelson: :) | 18:35 |
nelson | btorch: I have another question: whether it's re-authenticating or not. It looks like the code to do it is commented-out. | 18:35 |
*** katkee has quit IRC | 18:38 | |
*** medberry is now known as med_out | 18:39 | |
gholt | Damn that code that requires 204 instead of just 2xx. :) | 18:40 |
sandywalsh | annegentle, thanks! | 18:40 |
kbringard | awesome | 18:42 |
kbringard | rv = {'availabilityZoneInfo': [{'zoneName': 'nova', | 18:42 |
kbringard | 'zoneState': 'available'}]} | 18:42 |
kbringard | I believe that explains my problem | 18:42 |
kbringard | haha | 18:42 |
sandywalsh | kbringard, yeah, don't confuse availability zones with Zones (bad choice of names) | 18:43 |
sandywalsh | availability zones are logical partitionings *within* a Zone | 18:43 |
kbringard | right, but, I was trying to change the output of the availability zone names | 18:43 |
kbringard | and it looks like with verbose, the parent is always hardcoded to nova | 18:43 |
sandywalsh | ah, yes, that would do it then | 18:43 |
sandywalsh | :) | 18:43 |
kbringard | so, then real quick... | 18:44 |
sandywalsh | another tip with novaclient is to use the --debug option to see what's coming and going to the REST interface | 18:44 |
jaypipes | gah, my air conditioning is hosed. | 18:44 |
sandywalsh | RUN ... jaypipes is going to explode! | 18:45 |
kbringard | a Zone is an entire setup with an API server, etc... and an availability zone is a way to create fault tolerance within a Zone? | 18:45 |
jaypipes | sandywalsh: yeah, no joke :) | 18:45 |
sandywalsh | kbringard, exactly | 18:45 |
kbringard | jaypipes: for some reason I read that as "My hair conditioner" | 18:45 |
mrmartin | Guys, is there anyone here who is involved in Lunr development / planning ? | 18:45 |
kbringard | and I was a bit confused | 18:45 |
jaypipes | kbringard: which would be very funny. | 18:45 |
jaypipes | kbringard: :) | 18:45 |
kbringard | sandywalsh: so I'd want to do something like... location as the Zone, and then location0, location1, location2 as the availability_zone | 18:46 |
*** skiold has quit IRC | 18:46 | |
kbringard | jaypipes: as an aside, we got the chunking stuff working | 18:47 |
kbringard | thanks for the heads up on that :-D | 18:47 |
sandywalsh | kbringard, sounds reasonable ... I haven't tried that. | 18:47 |
jaypipes | kbringard: awesome news! | 18:47 |
sandywalsh | gotta step out for a bit ... will read thread after | 18:47 |
kbringard | hopefully in the next couple of weeks I'll stop being lazy and I'll update ogle with image upload support | 18:47 |
kbringard | thanks sandywalsh | 18:48 |
creiht | mrmartin: howdy | 18:48 |
*** jbryce has joined #openstack | 18:49 | |
creiht | mrmartin: I am the lead for lunr, how can I help you? | 18:49 |
mrmartin | I was in UDS@Budapest today, and saw a presentation about Openstack. | 18:50 |
nelson | Did something change in the % encoding of containers between 1.1 and 1.3 ? | 18:51 |
nelson | Because what used to be images%2Fswift is now being encoded as images%25252Fswift | 18:51 |
nelson | I think they're being encoded twice now. | 18:51 |
gholt | nelson: I think that occurs only in the logs. | 18:52 |
nelson | I think ... maybe ... I'll just throw away the %, since it's just causing problems. | 18:52 |
nelson | gholt: I can totally imagine that the logs are encoding one of them, and something else is encoding another one. | 18:53 |
nelson | :) | 18:53 |
gholt | Heh, but I don't know of anything API-wise that changed encoding. | 18:53 |
*** Kronick has left #openstack | 18:53 | |
btorch | nelson: I'm not sure about the re-authentication php code being commented out .. I think I used the php api once | 18:53 |
btorch | :) | 18:53 |
nelson | "but I didn't CHAAAAAAAANGE anything" :) | 18:53 |
nelson | btorch: I'm hoping that conrad will know. | 18:54 |
*** joearnold has quit IRC | 18:55 | |
*** cuzoka has joined #openstack | 18:55 | |
*** anotherjesse has joined #openstack | 18:59 | |
*** mattray has quit IRC | 19:02 | |
*** kashyap has quit IRC | 19:04 | |
*** MarkAtwood has joined #openstack | 19:05 | |
*** boncos has joined #openstack | 19:05 | |
*** cuzoka has quit IRC | 19:06 | |
boncos | hi, I'm trying to install openstack (swift) on fedora but got a problem when running 'swauth-prep -K' | 19:07 |
boncos | Auth subsystem prep failed: 500 Server Error | 19:07 |
boncos | can anybody guide me through troubleshooting this problem? | 19:08 |
annegentle | boncos: gholt may be able to help, though I don't know if anyone has tested the Swauth system on Fedora. | 19:08 |
*** enigma has quit IRC | 19:09 | |
notmyname | bryguy: look in the proxy server logs (syslog) | 19:10 |
boncos | annegentle, maybe you can try to help me? I have no knowledge of the python side of things | 19:10 |
*** enigma has joined #openstack | 19:11 | |
bryguy | notmyname: I think you meant that for someone else. :) | 19:11 |
nelson | boncos: suggestion: make sure you have set the ownership on /srv/node to swift:swift | 19:11 |
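(That is roughly the following on each storage node, with the path adjusted to wherever your devices actually live:)

    chown -R swift:swift /srv/node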
*** joearnold has joined #openstack | 19:11 | |
notmyname | bryguy: sorry | 19:11 |
notmyname | boncos: ^ | 19:12 |
nelson | st is telling me "ValueError: No JSON object could be decoded" when I try to download a file. anybody seen this before? | 19:12 |
nelson | and it's giving me a traceback. That implies a bug in 'st', because it shouldn't be throwing exceptions. | 19:12 |
boncos | nelson, /srv/1 is a symlink to /mnt/sdb1/1 (and yes, /mnt/sdb1/1 is owned by swift:swift) | 19:13 |
nelson | that *ought* to work. | 19:14 |
boncos | notmyname, http://fpaste.org/kmBt/ this is my log | 19:14 |
*** fabiand__ has left #openstack | 19:15 | |
*** enigma has quit IRC | 19:15 | |
*** lele_ has quit IRC | 19:15 | |
boncos | line 6-9 ... (header, '\r\n\t'.join(values))#012TypeError: sequence item 0: expected string, int found | 19:17 |
boncos | what trigger that error ? | 19:18 |
boncos | !ping gholt | 19:21 |
openstack | pong | 19:21 |
notmyname | heh | 19:21 |
boncos | anybody have a clue ? | 19:24 |
notmyname | boncos: ya, that's the root of the issue, but I'm in a meeting now. I was hoping gholt would help (I think he's talking to contractors at his house at the moment, though) | 19:26 |
boncos | notmyname, oh okey .. take your time ... i'm not in rush | 19:27 |
*** kbringard_ has joined #openstack | 19:29 | |
*** kbringard has quit IRC | 19:29 | |
*** kbringard_ is now known as kbringard | 19:29 | |
openstackjenkins | Project swift build #257: SUCCESS in 30 sec: http://jenkins.openstack.org/job/swift/257/ | 19:31 |
openstackjenkins | Tarmac: Rename swift-stats-* to swift-dispersion-* to avoid confusion with log stats stuff | 19:31 |
*** mrmartin has quit IRC | 19:36 | |
*** HouseAway is now known as AimanA | 19:36 | |
*** katkee has joined #openstack | 19:37 | |
*** woostert has quit IRC | 19:40 | |
*** moreno has joined #openstack | 19:40 | |
moreno | hi all | 19:40 |
moreno | cuold any one help me with one problem? | 19:41 |
*** imsplitbit has joined #openstack | 19:42 | |
moreno | I have some trobles when trying to communicate with an instance | 19:43 |
*** ctennis has quit IRC | 19:44 | |
*** moreno has quit IRC | 19:47 | |
*** katkee has quit IRC | 19:48 | |
*** katkee has joined #openstack | 19:49 | |
gholt | boncos: I'm not sure if it's because of python 2.7; we currently only test with 2.6.5 | 19:50 |
*** katkee has quit IRC | 19:51 | |
gholt | boncos: That seems a little suspect, but the error doesn't quite fit. So I'm still guessing right now. | 19:52 |
boncos | gholt, mmhh ... ok i'll try to install on centos | 19:52 |
boncos | gholt, centos is using python 2.4 ... is that ok ? | 19:55 |
notmyname | boncos: py2.4 won't work | 19:55 |
boncos | uuhh .. | 19:56 |
notmyname | for example, we use context managers (the with statement) a lot | 19:56 |
gholt | Do you hate Ubuntu 10.04 LTS that much? ;P | 19:56 |
*** lorin1 has left #openstack | 19:56 | |
boncos | i'm not familiar with ubuntu | 19:56 |
*** lorin1 has joined #openstack | 19:57 | |
boncos | that's it | 19:57 |
gholt | Joking, btw, these things /should/ work. I'm going to do some experiments and see if I can figure what's up. | 19:57 |
boncos | not hate it | 19:57 |
boncos | : | 19:57 |
boncos | :) | 19:57 |
gholt | One option you can try, if you're a coder-type, is changing the putheader calls in swift/common/bufferedhttp.py to use str(value) instead of just value. | 19:58 |
*** katkee has joined #openstack | 19:58 | |
gholt | But I just don't know why that'd affect you and no one else yet. | 19:58 |
*** brd_from_italy has joined #openstack | 20:00 | |
boncos | i'm not python coder, but i can program a bit | 20:01 |
*** anotherjesse has quit IRC | 20:01 | |
*** keny has joined #openstack | 20:04 | |
keny | hi everyone | 20:04 |
*** amccabe has quit IRC | 20:05 | |
boncos | gholt, I already changed bufferedhttp.py ... still an error | 20:06 |
gholt | And you restarted all the services after? That's weird. How would it get an int if you explicitly told it str(x)... | 20:06 |
boncos | btw, I changed this bufferedhttp.py (bufferedhttp.pyc <-- what is this file?) | 20:06 |
boncos | oh, I should restart the service? | 20:07 |
gholt | That's the auto-generated "compiled" version. Python should take care of that for you. | 20:07 |
boncos | ok .. i'll restart | 20:07 |
gholt | Oh, yeah, restart the proxy at least: swift-init proxy restart | 20:07 |
keny | I am trying to run nova on a system with xen. I have tested that xen runs fine (can create VM images, run them fine, etc). When I try to run one on openstack, spawn fails and machine is shutdown. | 20:08 |
keny | I have an extract of nova-compute.log http://pastebin.com/WgVD1AMi | 20:08 |
boncos | gholt, ah... no error after I changed bufferedhttp.py and restarted the proxy | 20:09 |
*** Kronick has joined #openstack | 20:09 | |
*** RJD22|away is now known as RJD22 | 20:09 | |
keny | also, my nova.conf: http://pastebin.com/mSsT5Xkq | 20:09 |
gholt | Interesting. I'm actually completely unsure why it doesn't blow up with python 2.6. But I will put in a bug and fix so it works with 2.7 (and keeps working with 2.6). | 20:10 |
vishy | keny: you need -nouse_cow_images | 20:10 |
vishy | -- that is | 20:10 |
*** Kronick has left #openstack | 20:10 | |
keny | One thing that caught my eye is that in the first log qemu is launched | 20:10 |
vishy | although that doesn't look like the particular bug you are hitting | 20:10 |
keny | vishy thanks for the tip, I will include that | 20:10 |
vishy | cow images don't work with xen/libvirt | 20:10 |
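(Roughly, the nova.conf flags being suggested for this setup, assuming xen is being driven through libvirt as the logs above imply; spellings as used elsewhere in this conversation.)

    --nouse_cow_images
    --libvirt_type=xen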
*** dobber_ has joined #openstack | 20:10 | |
boncos | gholt, but thereis another error ... i'll paste it to pastebin | 20:10 |
*** infinite-scale has quit IRC | 20:11 | |
*** llang629 has joined #openstack | 20:11 | |
boncos | gholt, http://fpaste.org/5G8T/ | 20:12 |
gholt | boncos: Okay, those are the other services I didn't have you restart, lol. | 20:13 |
gholt | boncos: So try a swift-init main restart | 20:14 |
boncos | ok | 20:14 |
*** watcher has joined #openstack | 20:15 | |
*** johnpur has quit IRC | 20:15 | |
boncos | ah you right .. no error then :D | 20:15 |
boncos | thanks a lot gholt | 20:15 |
*** llang629_ has joined #openstack | 20:16 | |
gholt | boncos: Ah, I figured it out. Python 2.7 refactored httplib to use one class hierarchy, whereas in 2.6 there were two. | 20:16 |
* gholt runs off to chat with home contractors, back in a bit. | 20:16 | |
tr3buchet | annegentle: i'm changing the default behavior of a flag; is there some specific place where that is documented? | 20:16 |
uvirtbot | New bug: #781878 in nova "Error during report_driver_status(): host_state_interval" [Undecided,New] https://launchpad.net/bugs/781878 | 20:16 |
*** llang629_ has left #openstack | 20:17 | |
*** llang629 has quit IRC | 20:18 | |
kbringard | dprince: I'm seeing that as well | 20:18 |
kbringard | FYI | 20:18 |
keny | using --nouse_cow_images appears to help, since I'm getting a new error now =D http://pastebin.com/fAzsnDmh (googled but found nothing :/ ) | 20:19 |
*** CloudChris has joined #openstack | 20:21 | |
*** llang629 has joined #openstack | 20:21 | |
dprince | kbringard: roger that. Waiting for sandywalsh/sirp to weigh in. But I think it came in w/ one of the distributed scheduler merges. | 20:21 |
kbringard | makes sense | 20:22 |
*** llang629 has quit IRC | 20:25 | |
*** CloudChris has left #openstack | 20:29 | |
*** markwash has quit IRC | 20:30 | |
*** lorin1 has left #openstack | 20:32 | |
*** RJD22 is now known as RJD22|away | 20:35 | |
*** piken is now known as piken_afk | 20:35 | |
*** watcher has quit IRC | 20:36 | |
annegentle | tr3buchet: the flag docs are auto-generated from docstrings, but I also update them manually in openstack-manuals. Which flag and what's the change? | 20:37 |
*** photron_ has quit IRC | 20:37 | |
*** dprince has quit IRC | 20:39 | |
*** mgoldmann has quit IRC | 20:39 | |
*** boncos has quit IRC | 20:41 | |
*** markwash has joined #openstack | 20:43 | |
*** kbringard has quit IRC | 20:46 | |
*** kbringard has joined #openstack | 20:46 | |
*** mgoldmann has joined #openstack | 20:46 | |
*** katkee has quit IRC | 20:48 | |
notmyname | nelson: joearnold: just merged a swift change that puts each transaction id in a response header. should really help with debugging issues | 20:54 |
nelson | on the file coming back? | 20:56 |
*** Shentonfreude has quit IRC | 20:56 | |
nelson | yeah, will need to write a program that grovels through the log file given the headers. | 20:56 |
nelson | although it's UUID-like enough that grep alone should do it. | 20:56 |
notmyname | yup | 20:57 |
*** keny has quit IRC | 20:58 | |
notmyname | 2 advantages: 1) you can easily find the log entries associated with the request 2) if a response doesn't have an X-Trans-ID header, it's not from swift | 20:58 |
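(A quick way to see it in practice; the endpoint, account, and token are illustrative.)

    # any response from the proxy should now carry the header
    curl -sI -H 'X-Auth-Token: <token>' https://<proxy>:8080/v1/AUTH_<account> | grep -i x-trans-id
    # then, on the proxy node, pull the matching log lines
    grep <trans-id> /var/log/syslog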
*** infinite-scale has joined #openstack | 20:59 | |
*** mgoldmann_ has joined #openstack | 21:01 | |
openstackjenkins | Project swift build #258: SUCCESS in 35 sec: http://jenkins.openstack.org/job/swift/258/ | 21:01 |
openstackjenkins | Tarmac: added transaction id header to every response | 21:01 |
*** AlexNeef has joined #openstack | 21:01 | |
tr3buchet | annegentle: the flag is vlan_interface and I've changed the default from 'eth0' to None | 21:02 |
nelson | notmyname: yup | 21:02 |
*** mgoldmann has quit IRC | 21:03 | |
tr3buchet | annegentle: updated the docstring a bit | 21:03 |
*** katkee has joined #openstack | 21:07 | |
annegentle | tr3buchet: great, thanks. So when I install nova, and use VLAN, do I now also need to change my nova.conf and add --vlan_interface with my known interface value? | 21:09 |
*** lionel has quit IRC | 21:10 | |
*** llang629 has joined #openstack | 21:10 | |
tr3buchet | annegentle: there are two options. A) just as you've said. B) upon creating networks, you also have to pass in the bridge_interface to use | 21:10 |
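(A sketch of those two options, assuming the flag names discussed here; the interface and the network-create arguments are illustrative.)

    # option A: keep the old behaviour explicitly in nova.conf
    --vlan_interface=eth0

    # option B: pass the interface when creating the network
    # (exact arguments per the nova-manage help)
    nova-manage network create ... --bridge_interface=eth0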
*** RJD22|away is now known as RJD22 | 21:11 | |
*** katkee has quit IRC | 21:13 | |
*** lionel has joined #openstack | 21:14 | |
*** imsplitbit has quit IRC | 21:16 | |
*** j05h has quit IRC | 21:17 | |
annegentle | tr3buchet: ok, that'll affect some instructions too, I'll get on it for diablo docs | 21:18 |
annegentle | tr3buchet: thanks for the heads-up! | 21:18 |
kbringard | annegentle: do you know if there is a list of failure scenarios along with recovery options somewhere? | 21:18 |
kbringard | or recovery steps, maybe I should say | 21:18 |
annegentle | kbringard: hm, not that I know of... for Compute only? | 21:18 |
nelson | mwahahaha, fetching files again, phwew. | 21:19 |
kbringard | any and all | 21:19 |
annegentle | kbringard: sounds like a great topic - start writing in the wiki possibly? | 21:19 |
kbringard | annegentle: yessm, just wanted to make sure I wasn't duplicating efforts | 21:19 |
nelson | gholt: so this twitter works again: https://twitter.com/#!/russnelson/status/68050129105592320 | 21:19 |
notmyname | nelson: it's so magical! | 21:20 |
nelson | yeah, damnit, isn't it?? | 21:21 |
*** markwash1 has joined #openstack | 21:22 | |
tr3buchet | annegentle: sure no problem. we should probably also chat some about multi-nic soon | 21:23 |
*** j05h has joined #openstack | 21:24 | |
*** markwash has quit IRC | 21:25 | |
*** llang629 has quit IRC | 21:26 | |
*** llang629 has joined #openstack | 21:28 | |
*** llang629 has left #openstack | 21:28 | |
vishy | tr3buchet: have you had a chance to check out networking branches yet? | 21:29 |
*** rcc has quit IRC | 21:30 | |
*** brd_from_italy has quit IRC | 21:34 | |
*** kbringard has quit IRC | 21:40 | |
*** j05h has quit IRC | 21:41 | |
*** infinite-scale has quit IRC | 21:44 | |
*** grapex has left #openstack | 21:47 | |
uvirtbot | New bug: #781909 in swift "bufferedhttp should str header values to be Python 2.7 compatible" [Undecided,New] https://launchpad.net/bugs/781909 | 21:51 |
*** mcclurmc_ has quit IRC | 21:53 | |
*** clauden_ has quit IRC | 21:56 | |
*** mcclurmc_ has joined #openstack | 22:01 | |
*** galthaus has quit IRC | 22:03 | |
*** gondoi has quit IRC | 22:06 | |
*** dobber_ has quit IRC | 22:10 | |
*** AXiS_SharK has joined #openstack | 22:10 | |
AXiS_SharK | does anyone have a comparison between openstack and vmware's vcloud director? | 22:11 |
AXiS_SharK | I'm looking for something like … how openstack does multi-tenancy, organizational hierarchies, service catalog, chargeback, vApps, user experience, etc | 22:12 |
*** j05h has joined #openstack | 22:18 | |
AXiS_SharK | !list | 22:18 |
openstack | AXiS_SharK: Admin, Channel, ChannelLogger, Config, MeetBot, Misc, Owner, Services, and User | 22:18 |
*** patcoll has quit IRC | 22:21 | |
*** matiu has joined #openstack | 22:22 | |
*** rnirmal has quit IRC | 22:27 | |
*** jaypipes has quit IRC | 22:39 | |
*** agarwalla has quit IRC | 22:40 | |
*** lionel has quit IRC | 22:40 | |
*** pguth66 has quit IRC | 22:40 | |
*** Dweezahr has quit IRC | 22:40 | |
*** niksnut has quit IRC | 22:40 | |
*** Pathin has quit IRC | 22:40 | |
*** andy-hk has quit IRC | 22:40 | |
*** jamiec has quit IRC | 22:40 | |
*** aryan has quit IRC | 22:40 | |
*** arun has quit IRC | 22:40 | |
*** clayg has quit IRC | 22:40 | |
*** pquerna has quit IRC | 22:40 | |
*** dh has quit IRC | 22:40 | |
*** cdbs has quit IRC | 22:40 | |
*** romans has quit IRC | 22:40 | |
*** ctennis has joined #openstack | 22:42 | |
AlexNeef | who knows about keystone project? | 22:43 |
*** lionel has joined #openstack | 22:45 | |
*** pguth66 has joined #openstack | 22:45 | |
*** Dweezahr has joined #openstack | 22:45 | |
*** niksnut has joined #openstack | 22:45 | |
*** andy-hk has joined #openstack | 22:45 | |
*** jamiec has joined #openstack | 22:45 | |
*** aryan has joined #openstack | 22:45 | |
*** arun has joined #openstack | 22:45 | |
*** clayg has joined #openstack | 22:45 | |
*** pquerna has joined #openstack | 22:45 | |
*** dh has joined #openstack | 22:45 | |
*** cdbs has joined #openstack | 22:45 | |
*** romans has joined #openstack | 22:45 | |
*** Pathin has joined #openstack | 22:46 | |
*** Pathin_ has joined #openstack | 22:47 | |
gholt | AlexNeef: I think KnightHacker is the only one in channel. | 22:47 |
AlexNeef | knightHacker let me know if you have a minute. | 22:47 |
*** cp16net has quit IRC | 22:49 | |
*** openpercept_ has quit IRC | 22:53 | |
*** openpercept1 has joined #openstack | 22:55 | |
*** mgoldmann_ has quit IRC | 22:56 | |
*** derrick_ has joined #openstack | 22:58 | |
*** Dumfries has quit IRC | 23:01 | |
*** Eyk has quit IRC | 23:01 | |
*** Dumfries has joined #openstack | 23:01 | |
*** miclorb has joined #openstack | 23:05 | |
*** llang629 has joined #openstack | 23:09 | |
*** llang629 has left #openstack | 23:09 | |
*** dragondm has quit IRC | 23:11 | |
*** dysinger has joined #openstack | 23:16 | |
*** jguerrero__ has joined #openstack | 23:21 | |
joearnold | notmyname: Yes. That would be helpful. Is the intent to always turn on X-Trans-ID or just for debugging? | 23:21 |
*** tblamer has quit IRC | 23:25 | |
*** jkoelker has quit IRC | 23:29 | |
notmyname | joearnold: it's always on. there is no cost to it (a few extra bytes in headers, but no security issues) | 23:30 |
notmyname | joearnold: it's part of the catch_errors middleware | 23:30 |
notmyname | so therefore the intent is to always have it on | 23:31 |
joearnold | notmyname: Makes sense. I don't see the header size byte count being an issue. (We've already increased the token size by about that many bytes in one of our deployments. ) | 23:34 |
*** obino has quit IRC | 23:46 | |
*** mszilagyi has quit IRC | 23:48 | |
*** obino has joined #openstack | 23:48 | |
*** BK_man has quit IRC | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!