clayg | dfg: zaitcev: thanks for the pings - i'll look at those changes | 00:02 |
---|---|---|
zaitcev | clayg: I'm still thinking about splitting up, but the first chunk has to contain a ton of trampolines now. | 00:03 |
*** mkollaro has quit IRC | 00:10 | |
*** openstackgerrit has quit IRC | 00:19 | |
*** openstackgerrit has joined #openstack-swift | 00:20 | |
*** dmorita has joined #openstack-swift | 00:28 | |
* notmyname is out for a while | 00:28 | |
*** matsuhashi has joined #openstack-swift | 00:29 | |
*** openstackgerrit has quit IRC | 00:34 | |
*** openstackgerrit has joined #openstack-swift | 00:34 | |
*** shri has quit IRC | 00:47 | |
*** openstackgerrit has quit IRC | 00:49 | |
*** openstackgerrit has joined #openstack-swift | 00:49 | |
*** csd has quit IRC | 00:58 | |
portante | jogo: I'd be keen to help with that if you're interested | 01:01 |
*** zul has quit IRC | 01:03 | |
*** zul has joined #openstack-swift | 01:03 | |
jogo | notmyname: cool, well I wasn't inherently volunteering to spin that lib out myself | 01:04 |
jogo | so if you want to do it instead that would be cool | 01:04 |
cihhan | notmyname: btw i have one more question if you're available: how is the connection between proxy and storage nodes? is it encrypted or completely raw? | 01:17 |
*** aurynn has joined #openstack-swift | 01:29 | |
*** haomaiwang has joined #openstack-swift | 01:30 | |
*** gyee has quit IRC | 01:32 | |
*** haomaiwang has quit IRC | 01:34 | |
*** hipster has joined #openstack-swift | 01:36 | |
*** zul has quit IRC | 01:54 | |
*** zul has joined #openstack-swift | 01:56 | |
*** madhuri has quit IRC | 01:57 | |
*** saschpe has quit IRC | 02:00 | |
*** saschpe has joined #openstack-swift | 02:01 | |
openstackgerrit | A change was merged to openstack/swift: taking the global reqs that we can https://review.openstack.org/94669 | 02:11 |
portante | cihhan: unless you have taken steps to explicitly use SSL, I believe proxy -> storage node is not using SSL in your setup | 02:14 |
portante | jogo: do you have a target first project as a guinea pig | 02:15 |
*** zul has quit IRC | 02:22 | |
jogo | portante: I was thinking nova | 02:23 |
*** hipster has quit IRC | 02:25 | |
*** tanee has quit IRC | 02:25 | |
*** tanee has joined #openstack-swift | 02:26 | |
*** kenhui has joined #openstack-swift | 02:43 | |
*** kenhui has quit IRC | 02:46 | |
*** patchbot has quit IRC | 02:55 | |
*** patchbot` has joined #openstack-swift | 02:55 | |
*** matsuhas_ has joined #openstack-swift | 02:55 | |
*** minnear_ has joined #openstack-swift | 02:56 | |
*** patchbot` is now known as patchbot | 02:56 | |
*** nosnos_ has joined #openstack-swift | 02:57 | |
*** fbo_away has joined #openstack-swift | 02:57 | |
*** mkerrin1 has joined #openstack-swift | 02:58 | |
*** ryao_ has joined #openstack-swift | 03:00 | |
*** wklely has joined #openstack-swift | 03:02 | |
*** fbo has quit IRC | 03:03 | |
*** minnear has quit IRC | 03:03 | |
*** matsuhashi has quit IRC | 03:03 | |
*** jeblair has quit IRC | 03:03 | |
*** fbo_away is now known as fbo | 03:03 | |
*** nosnos has quit IRC | 03:03 | |
*** mkerrin has quit IRC | 03:03 | |
*** wer has quit IRC | 03:03 | |
*** wkelly has quit IRC | 03:03 | |
*** russell_h has quit IRC | 03:03 | |
*** chalcedony has quit IRC | 03:03 | |
*** ryao has quit IRC | 03:03 | |
*** russell_h has joined #openstack-swift | 03:03 | |
*** russell_h has quit IRC | 03:03 | |
*** wer has joined #openstack-swift | 03:04 | |
*** russell_h has joined #openstack-swift | 03:04 | |
*** saschpe- has joined #openstack-swift | 03:05 | |
*** russell_h has quit IRC | 03:05 | |
*** russell_h has joined #openstack-swift | 03:05 | |
*** chalcedony has joined #openstack-swift | 03:06 | |
*** serverascode has quit IRC | 03:08 | |
*** saschpe has quit IRC | 03:09 | |
*** gholt has quit IRC | 03:09 | |
*** jeblair has joined #openstack-swift | 03:09 | |
*** serverascode has joined #openstack-swift | 03:10 | |
*** hipster has joined #openstack-swift | 03:10 | |
*** gholt has joined #openstack-swift | 03:14 | |
*** ChanServ sets mode: +v gholt | 03:14 | |
*** omame has quit IRC | 03:15 | |
*** aurynn has quit IRC | 03:22 | |
*** mrsnivvel has joined #openstack-swift | 03:23 | |
cihhan | portante, is there a way to use encrypted data transfer or is it too costly? | 03:49 |
*** nosnos_ has quit IRC | 03:52 | |
*** hipster has quit IRC | 03:55 | |
*** omame has joined #openstack-swift | 03:57 | |
*** john3213 has joined #openstack-swift | 04:20 | |
*** john3213 has left #openstack-swift | 04:25 | |
*** haomaiwang has joined #openstack-swift | 04:35 | |
*** nosnos has joined #openstack-swift | 04:35 | |
*** haomaiwang has quit IRC | 04:40 | |
*** krtaylor has joined #openstack-swift | 04:41 | |
*** erlon has quit IRC | 04:49 | |
*** igor_ has joined #openstack-swift | 04:59 | |
*** igor__ has quit IRC | 05:02 | |
*** ppai has joined #openstack-swift | 05:02 | |
*** omame has quit IRC | 05:05 | |
*** psharma has joined #openstack-swift | 05:21 | |
hugokuo | morning ... | 05:50 |
*** zaitcev has quit IRC | 06:10 | |
*** nshaikh has joined #openstack-swift | 06:12 | |
*** nmap911 has joined #openstack-swift | 06:13 | |
nmap911 | Hi all. When enabling ceilometer monitoring on swift, is there any specific reason it needs access to my ceilometer message queues? I thought ceilometer only polled swift proxies for deltas? | 06:14 |
cschwede_ | nmap911: Hi! No, the ceilometer middleware (https://github.com/openstack/ceilometer/blob/master/ceilometer/objectstore/swift_middleware.py) pushes data from Swift to Ceilometer | 06:25 |
nmap911 | cschwede_ : thanks for the link, I had to install the ceilometer-api package to get the right libs - do I then update the /etc/ceilometer/ceilometer.conf file with the details for the message queue? | 06:30 |
cschwede_ | nmap911: I think so (at least from looking at http://docs.openstack.org/developer/ceilometer/install/manual.html#installing-the-notification-agent). The ceilometer middleware is developed by the ceilometer project, if modifying /etc/ceilometer/ceilometer.conf doesn’t work you might also ask on #openstack-ceilometer | 06:34 |
nmap911 | cool stuff. thanks a lot for your help! | 06:36 |
cschwede_ | nmap911: you’re welcome, glad to help! | 06:37 |
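For reference, wiring the ceilometer middleware into the Swift proxy pipeline looks roughly like this (a minimal sketch based on the icehouse-era ceilometer docs; the filter name and your existing pipeline contents are assumptions about this deployment):

```sh
# Append a ceilometer filter section to the proxy config (sketch; the
# egg:ceilometer#swift entry point is the one the docs of the time used)
cat >> /etc/swift/proxy-server.conf <<'EOF'
[filter:ceilometer]
use = egg:ceilometer#swift
EOF
# then edit the existing 'pipeline =' line so it ends with:
#   ... ceilometer proxy-server
```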
*** sandywalsh has quit IRC | 06:42 | |
*** sandywalsh has joined #openstack-swift | 06:44 | |
*** saurabh_ has joined #openstack-swift | 06:48 | |
*** saurabh_ has joined #openstack-swift | 06:48 | |
*** sandywalsh has quit IRC | 06:56 | |
*** sandywalsh has joined #openstack-swift | 06:58 | |
*** ppai has quit IRC | 07:11 | |
openstackgerrit | OpenStack Proposal Bot proposed a change to openstack/swift: Updated from global requirements https://review.openstack.org/88736 | 07:12 |
psharma | cschwede_, nmap911: i'm trying to add the ceilometer middleware to a swift-icehouse dev setup. since the middleware isn't shipped with swift i'm getting errors, so my question is: do i need to install all of the ceilometer stuff on the swift node just for its swift middleware? i'm trying to keep swift and ceilometer on different VMs | 07:22 |
nmap911 | which errors are you receiving? | 07:26 |
nmap911 | I had to install 2 different pip libraries and the ceilometer-api package | 07:26 |
nmap911 | psharma: | 07:27 |
*** ppai has joined #openstack-swift | 07:29 | |
cschwede_ | psharma: yes, because the middleware uses different parts of the ceilometer package (https://github.com/openstack/ceilometer/blob/master/ceilometer/objectstore/swift_middleware.py#L60-L64) | 07:34 |
*** mkollaro has joined #openstack-swift | 07:39 | |
*** ppai has quit IRC | 07:41 | |
*** omame has joined #openstack-swift | 07:45 | |
*** nacim has joined #openstack-swift | 07:50 | |
*** ppai has joined #openstack-swift | 07:54 | |
*** foexle has joined #openstack-swift | 07:55 | |
*** mlipchuk has joined #openstack-swift | 08:03 | |
*** mlipchuk has quit IRC | 08:08 | |
*** mlipchuk has joined #openstack-swift | 08:23 | |
*** blazesurfer has joined #openstack-swift | 08:26 | |
blazesurfer | Hi All | 08:26 |
*** jamie_h has joined #openstack-swift | 08:26 | |
psharma | nmap911, can you name the pip libraries | 08:26 |
blazesurfer | Has anyone experienced a container-replicator ERROR rsync failed with 10: ? | 08:28 |
hugokuo | blazesurfer: Please paste the log on pastebin. Thanks :) | 08:29 |
nmap911 | psharma: sure one sec | 08:33 |
blazesurfer | ok, just the error or some lines around it as well? | 08:33 |
nmap911 | psharma: pecan==0.4.5 & happybase>=0.5, !=0.7 | 08:33 |
nmap911 | blazesurfer: some lines around it always helps | 08:34 |
psharma | ok , how are you installing ceilometer-api from source? | 08:34 |
blazesurfer | Ok http://pastebin.com/aE3YHbq3 hope this is the pastebin you are referring to. | 08:37 |
nmap911 | psharma: on ubuntu 14.04 i do it via the package manager (apt-get install) | 08:38 |
psharma | i'm using fedora 19, can you suggest some steps? | 08:39 |
nmap911 | easiest would be to git clone the repo, pip install -r on the requirements.txt file and then run python setup.py install | 08:41 |
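A sketch of the from-source route nmap911 describes, folding in the pip pins he named earlier (the repo URL and running as root/in a virtualenv are assumptions):

```sh
# install the two pinned dependencies nmap911 mentioned
pip install 'pecan==0.4.5' 'happybase>=0.5,!=0.7'
# then build ceilometer itself from source
git clone https://github.com/openstack/ceilometer.git
cd ceilometer
pip install -r requirements.txt
python setup.py install
```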
blazesurfer | also, i note that this channel is logged. is there a location where i can read over what has been previously discussed? might save some silly questions :) | 08:42 |
*** haomaiwa_ has joined #openstack-swift | 08:42 | |
psharma | nmap911, i am getting this http://fpaste.org/104051/00747483/ | 08:43 |
nmap911 | psharma: you need the gcc compiler packages - on ubuntu its included in build-essential | 08:44 |
nmap911 | psharma: yum install gcc | 08:44 |
psharma | it is there | 08:44 |
psharma | gcc-4.8.2-7.fc19.x86_64 | 08:45 |
hugokuo | blazesurfer: Could you please show me the container-ring ? | 08:46 |
*** haomaiwa_ has quit IRC | 08:47 | |
nmap911 | psharma: looks good | 08:50 |
blazesurfer | hugokuo: http://pastebin.com/qmu5H4hQ | 08:51 |
hugokuo | blazesurfer: you want a single replica of the container DB ? | 08:56 |
blazesurfer | Hugokuo: sorry, maybe i have misunderstood.. i have a single replica, yes. i am planning to move to a 3-replica solution; i'm in the process of planning a migration from this host to a 3 node cluster | 08:58 |
blazesurfer | Hugokuo: my aim is actually to get my disks to level out. i added two new drives to this single host when it ran out of space, and it has not balanced out, and this is the only recurring error i can see, so i wanted to fix it to see if that helped | 08:59 |
blazesurfer | am i barking up the wrong tree? | 09:00 |
hugokuo | blazesurfer: Rsync ERROR code : 10 Error in socket I/O | 09:03 |
*** h6w has quit IRC | 09:04 | |
hugokuo | The container replicator tries to rsync the .db to the local container server ...... but gets an ERROR in socket I/O. (am thinking) | 09:05 |
hugokuo | blazesurfer: need more information about the rsync configuration. | 09:06 |
hugokuo | Seems all the ERRORs were on sdb1/sdc1. There are many possibilities now. 1) rsync setting 2) Disk access permission for the container-replicator daemon on sdb1/sdc1 3) available disk space on sdb1/sdc1 ... | 09:10 |
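For possibility 1), a storage-node rsync config usually looks something like this (a sketch along the lines of Swift's SAIO docs; the uid/gid and paths are assumptions about this deployment):

```sh
cat > /etc/rsyncd.conf <<'EOF'
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid

[container]
max connections = 4
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
EOF
```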
blazesurfer | ok so disk space is an issue on sdb1 but not sdc1 | 09:12 |
hugokuo | blazesurfer: well... perhaps the mode of the sdc1 mount point is not allowing the daemon to operate | 09:13 |
blazesurfer | sorry, i am wrong: sdb1 and sdc1 are full | 09:13 |
blazesurfer | i'm trying to level out to sdd1 and sde1 | 09:13 |
hugokuo | blazesurfer: Perhaps to set the weight of sdb1 to 0 will help... | 09:14 |
blazesurfer | ok ill try that now. | 09:14 |
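The weight change hugokuo suggests would look roughly like this (a sketch; the builder path and device id are assumptions about this cluster):

```sh
swift-ring-builder container.builder                  # list devices and their ids
swift-ring-builder container.builder set_weight d1 0  # assuming sdb1 is device id 1
swift-ring-builder container.builder rebalance        # then redistribute container.ring.gz
```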
blazesurfer | since i'm using the 127 address, can i change the ip to the machine ip as well so i can expand the cluster out? | 09:14 |
hugokuo | blazesurfer: sure... but there's a limitation on the replica count now. It's not a dynamic value for now.... | 09:15 |
blazesurfer | oh ok, so i can increase the replica count on this ring? can i rebuild the ring without loss of data? | 09:16 |
hugokuo | in other words, it can't be changed in the current swift implementation .... (it will soon) | 09:16 |
blazesurfer | to migrate from grizzly to icehouse as well, am i better off building a new deployment and sucking the data across? that way i can use a better design | 09:17 |
blazesurfer | note there is only 4 tb of data or so in the current one | 09:17 |
*** ppai has quit IRC | 09:18 | |
hugokuo | blazesurfer: With 1 replica, you need to do it carefully. How will you suck the data across from the old deployment to the new one? | 09:20 |
blazesurfer | i'm curious about my options around that. is there a cloud sync utility, or would i be able to copy the files at the os level? | 09:26 |
hugokuo | blazesurfer: https://github.com/openstack/swift/blob/master/CHANGELOG#L439-L441 | 09:26 |
* hugokuo checking the Swift version of Grizzly | 09:26 | |
hugokuo | blazesurfer: how many containers do you have there ? | 09:28 |
hugokuo | blazesurfer: how many accounts in use within the cluster ? | 09:28 |
blazesurfer | ok 5 or 6 accounts | 09:28 |
blazesurfer | there should be many more containers; it was a poc that grew before i got back to redesign it | 09:29 |
hugokuo | blazesurfer: 1) There's no account-level sync tool from one cluster to another right now. 2) The container-sync feature may help, but that's not a good idea in your case. 3) For your data migration, it's mostly a manual administrator operation. | 09:31 |
hugokuo | blazesurfer: Is the object replica count set to 1 as well in this cluster ? | 09:31 |
blazesurfer | ok so id have to login as each account | 09:32 |
blazesurfer | not yet | 09:32 |
blazesurfer | the weight you mean ? | 09:32 |
hugokuo | blazesurfer: not weight. I mean the replica counts . 262144 partitions, 1.000000 replicas, 1 regions, 1 zones, 4 devices, 0.77 balance | 09:32 |
hugokuo | It is 1 for container. That means only a single copy of your container DB across all your devices now. | 09:33 |
blazesurfer | account 262144 partitions, 1.000000 replicas, 1 regions, 1 zones, 4 devices, 0.77 balance | 09:33 |
hugokuo | blazesurfer: k, how about object ? | 09:33 |
blazesurfer | 262144 partitions, 1.000000 replicas, 1 regions, 1 zones, 4 devices, 0.77 balance | 09:34 |
hugokuo | blazesurfer: :( ....... can I ask a question here ? Why would you want only 1 copy of each object when using Swift ??? | 09:35 |
hugokuo | Any specific reason ? | 09:35 |
acoles | cschwede_: did you get chance to look at the change to https://review.openstack.org/#/c/94347/ ? | 09:35 |
blazesurfer | thus building the new cluster with 3 replicas | 09:35 |
blazesurfer | it was originally built as a pilot, and the way we understood it when we read about it was that we could set it to 1 replica (note: only running one host, and on an enterprise grade san) | 09:36 |
*** ppai has joined #openstack-swift | 09:36 | |
blazesurfer | found something the other day that said when replicating or leveling storage, the node or replica can be offline at that time, so requests get answered by one of the other replicas. hope that reads right. | 09:37 |
blazesurfer | its being used as a backup target for offsite replication to the cloud (private cloud) | 09:37 |
hugokuo | blazesurfer: got it. If you want to keep the existing data in the new cluster, the best way is to join the new nodes into this cluster now. | 09:38 |
blazesurfer | ok | 09:38 |
blazesurfer | can i join icehouse nodes into this cluster? | 09:38 |
hugokuo | blazesurfer: Yes... | 09:38 |
blazesurfer | ok, then i can migrate to another proxy node after it has synced to the new nodes, i presume | 09:39 |
hugokuo | blazesurfer: Steps - 1) join the new node and devices to the ring 2) Modify the IP of 127.0.0.1 to the cluster-facing IP. 3) Set the replica count to 3 for the account/container/object rings 4) rebalance the rings and distribute them to all nodes. 5) Wait until all data has been replicated to the new nodes. | 09:41 |
hugokuo | blazesurfer: correct ... Just keep using the same rings... | 09:42 |
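hugokuo's steps 1-4 translate to ring-builder commands roughly like this (a sketch; the IPs, ports, device names, and device ids are assumptions, and the same commands must be repeated for the account and container builders):

```sh
# 1) add the new node's devices
swift-ring-builder object.builder add r1z2-192.168.1.20:6000/sdb1 100
# 2) repoint an existing 127.0.0.1 device at the cluster-facing IP
#    (d0 is an assumed device id; repeat per device)
swift-ring-builder object.builder set_info d0 192.168.1.10:6000/sdb1
# 3) raise the replica count
swift-ring-builder object.builder set_replicas 3
# 4) rebalance, then copy object.ring.gz to every node
swift-ring-builder object.builder rebalance
```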
blazesurfer | is there an easy way to tell when data has been replicated? | 09:42 |
blazesurfer | i'd like to remove this node eventually. had an issue with the enterprise san it's on, thus the new build and expanding across multiple hosts | 09:42 |
hugokuo | blazesurfer: yes... swift-recon, or observing the log file by grepping for the following pattern: $> grep object-replicator $log_file | grep remain | 09:43 |
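Both routes, concretely (a sketch; the log path is an assumption about this host's syslog setup):

```sh
swift-recon --replication                             # cluster-wide replication stats
grep object-replicator /var/log/syslog | grep remain  # or watch the logs directly
```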
blazesurfer | the rings i already have on the new hosts i built - can i delete them and move the new ones across? will i need to delete the data on the drives as well? | 09:43 |
blazesurfer | ok so need to make sure rsync is configured as well for it to replicate | 09:44 |
blazesurfer | cool | 09:44 |
hugokuo | blazesurfer: You must use the same rings across the cluster or the proxy will not be able to find the right partition number for any data. | 09:47 |
hugokuo | That's very very critical..... | 09:47 |
blazesurfer | yep, so i'll clean my new host by removing the rings and data from it, then update the ring files on the existing cluster with the correct ip, add the second zone (node) to that ring set, and copy them to the new host | 09:48 |
hugokuo | blazesurfer: perfect ... | 09:48 |
blazesurfer | i just need to work out how to change replica | 09:48 |
blazesurfer | count | 09:48 |
hugokuo | check the help of swift-ring-builder | 09:48 |
blazesurfer | in case of losing the rings, can you rebuild them? | 09:49 |
hugokuo | swift-ring-builder <builder_file> set_replicas <replicas> | 09:49 |
blazesurfer | thank you :) | 09:49 |
blazesurfer | i do the ring work, then rebalance, then distribute? then restart services, yes? | 09:51 |
hugokuo | blazesurfer: never tried to rebuild rings before .... but I have a blurry memory that the answer is "Yes" if you know the swift_hash_path_suffix = bad54b74-6101-4358-99fc-fe1a551e7fd in /etc/swift/swift.conf | 09:51 |
blazesurfer | yep thats what i had read as well. | 09:51 |
blazesurfer | i think you had to know the original partition sizing as well, but that is where i get blurry -- google will be my friend if i need it, i guess | 09:52 |
hugokuo | blazesurfer: I do not guarantee it. :) | 09:52 |
blazesurfer | guarantee which part? | 09:52 |
hugokuo | blazesurfer: rebuilding the rings 100% .... | 09:53 |
blazesurfer | i appreciate you taking the time to let me bounce the thought process off you and answering my questions | 09:53 |
hugokuo | It can be tested easily by producing a fake ring on your laptop :) | 09:53 |
blazesurfer | sorry about the spelling, bad at the best of times but getting sleepy | 09:53 |
hugokuo | blazesurfer: nite .... | 09:53 |
blazesurfer | yep, 8pm, so not too late | 09:54 |
blazesurfer | but i haven't slept well with the issues happening with storage and swift the last week or so | 09:54 |
hugokuo | blazesurfer: Don't worry about that. It's not too hard to figure out the problem :) | 09:55 |
blazesurfer | yeah, swift looks very solid actually once you understand it | 09:56 |
hugokuo | blazesurfer: Those operations could be done in 2 hours (I guess) | 09:56 |
hugokuo | blazesurfer: Yes, it is. Do you know what the best part of Swift is? Simplicity ...... | 09:56 |
hugokuo | I mean compared to some other fancy solutions. It's much simpler in its logic. | 09:57 |
blazesurfer | it does appear simple. i sort of understand the rings to some extent. given time it's becoming simpler :) it was a little bit of a different way of thinking, but very logical. | 09:57 |
blazesurfer | yeah | 09:57 |
blazesurfer | hugokuo: what time is it where you are? | 09:58 |
hugokuo | blazesurfer: It's about 6pm in Taipei, Taiwan. A small country south of Japan. :) | 09:59 |
blazesurfer | cool, so not too much time difference | 09:59 |
hugokuo | blazesurfer: yup... are you located in Germany ? | 10:00 |
blazesurfer | Australia | 10:00 |
hugokuo | blazesurfer: aha... good place | 10:00 |
blazesurfer | not bad i like it :) | 10:01 |
blazesurfer | i'm using tempauth with swift to keep it simple. you don't know of a gui for calculating the space used by accounts, do you? | 10:01 |
hugokuo | blazesurfer: I got to keep my works here. see you then.... | 10:01 |
blazesurfer | Hugokuo: no worries thank you for your help have a good night | 10:02 |
*** eglynn has joined #openstack-swift | 10:02 | |
*** mkollaro has quit IRC | 10:05 | |
*** BAKfr has joined #openstack-swift | 10:20 | |
*** sagar has joined #openstack-swift | 10:23 | |
*** sagar is now known as Guest27087 | 10:23 | |
*** sagar_ has joined #openstack-swift | 10:28 | |
sagar_ | Hello | 10:28 |
sagar_ | I have been following instructions for multiple server swift installation (http://docs.openstack.org/developer/swift/howto_installmultinode.html) | 10:29 |
sagar_ | But I am unable to create regions (above zones), which have been supported since swift 1.9 | 10:30 |
sagar_ | does anybody have any idea? | 10:30 |
*** Midnightmyth has joined #openstack-swift | 10:35 | |
ctennis | what's not working sagar_? | 10:38 |
blazesurfer | sagar_: what is the error you are getting? i haven't played with regions yet, though it's on my todo list once i fix my cluster | 10:38 |
ctennis | you just specify the region in front of the zone in the swift-ring-builder commands | 10:38 |
sagar_ | yes, that's what I had done | 10:38 |
*** erlon has joined #openstack-swift | 10:38 | |
blazesurfer | swift-ri\ | 10:39 |
sagar_ | but the command is not accepted | 10:39 |
blazesurfer | sorry typo wrong screen | 10:39 |
ctennis | are you using a swift > 1.9 ? | 10:39 |
ctennis | what syntax did you use? | 10:39 |
sagar_ | I want to use swift > 1.9 (1.13 even) | 10:40 |
sagar_ | but when I run swift --version | 10:40 |
sagar_ | it shows me 1.0 | 10:40 |
sagar_ | I used this syntax | 10:40 |
ctennis | ok, then you need a newer version of swift | 10:40 |
sagar_ | swift-ring-builder <builder_file> add r<region>z<zone>-<ip>:<port>/<device_name>_<meta> <weight> | 10:40 |
ctennis | yes, that syntax looks right | 10:41 |
sagar_ | but the only syntax listed in the man page of swift-ring-builder is | 10:41 |
sagar_ | swift-ring-builder <builder_file> add z<zone>-<ip>:<port>/<device_name>_<meta> <weight> | 10:41 |
ctennis | "swift" itself is part of a different package, a python utility called "swift-pythonclient" | 10:41 |
sagar_ | I want to install a newer version. I just followed the instructions given in the swift multinode setup doc on Ubuntu 12.04 | 10:42 |
sagar_ | but I get swift 1.0 | 10:42 |
sagar_ | How should I install the newest version? | 10:42 |
ctennis | run swift-ring-builder by itself; at the top of the output, what version does it report? | 10:43 |
ctennis | and does it reference regions in the help output? | 10:43 |
sagar_ | 1.3 | 10:44 |
sagar_ | no, it doesn't | 10:44 |
*** haomaiwang has joined #openstack-swift | 10:45 | |
ctennis | can you paste the output you're seeing somewhere, it's not obvious to me why it wouldn't work | 10:49 |
sagar_ | output of what? | 10:49 |
sagar_ | swift-ring-builder command? | 10:49 |
ctennis | What we need to find out is what version of the swift ubuntu package you have installed | 10:49 |
ctennis | yeah | 10:49 |
ctennis | I'm just not sure of the actual package name offhand | 10:50 |
ctennis | maybe "swift-proxy" | 10:50 |
blazesurfer | hmm, i just ran that command on my icehouse install and my grizzly one; swift-ring-builder 1.3 is what returns on the first line of both | 10:50 |
sagar_ | Is it fine if I paste the output here? | 10:50 |
ctennis | "dpkg -s swift-proxy" | 10:50 |
ctennis | use something like paste.openstack.org or gist.github.com | 10:51 |
sagar_ | okay, here is the output of the swift-ring-builder | 10:53 |
sagar_ | http://paste.openstack.org/show/81136/ | 10:53 |
sagar_ | this is for "dpkg -s swift-proxy" | 10:55 |
sagar_ | http://paste.openstack.org/show/81137/ | 10:55 |
ctennis | got it | 10:56 |
ctennis | so you have swift 1.4 installed | 10:56 |
ctennis | which makes sense why regions aren't working | 10:56 |
ctennis | it looks like that's the latest version packaged by Canonical for ubuntu precise | 10:56 |
sagar_ | how should I then install swift's newest version? Is it possible from the source? | 10:57 |
hugokuo | sagar_: upgrade it with new cloudarchive repo.... | 10:57 |
ctennis | hugokuo probably knows more than me. you can use ubuntu trusty (14.04), which has the newer version available, or this link (http://docs.openstack.org/developer/swift/development_saio.html) has some info on installing from source vs. packages | 10:58 |
hugokuo | sagar_: https://wiki.ubuntu.com/ServerTeam/CloudArchive Enable Icehouse | 10:58 |
hugokuo | sagar_: You are on Ubuntu right ??? The CloudArchive is the easiest way :) | 10:59 |
sagar_ | okay, thanks. I will try this. | 10:59 |
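Enabling the Icehouse Cloud Archive on 12.04 looks roughly like this (a sketch following the wiki page hugokuo linked):

```sh
sudo apt-get install python-software-properties
sudo add-apt-repository cloud-archive:icehouse
sudo apt-get update && sudo apt-get dist-upgrade
```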
hugokuo | ctennis: good morning ... bro ~~~~~~~~ | 11:00 |
ctennis | :) | 11:00 |
*** haomaiwang has quit IRC | 11:01 | |
sagar_ | Thanks to both of you. Now I have version 1.13 :) | 11:10 |
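With a region-aware swift-ring-builder in place, the syntax from earlier in the discussion works; for example (all values illustrative):

```sh
swift-ring-builder object.builder add r1z1-10.0.0.1:6000/sdb1 100
```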
blazesurfer | Question for thought: is there a way to control the number of replicas or copies of data at the account level as well? ie if i have a user that wants their data in 3 locations (zones) and a user that doesn't really want 3 but 1 location (zone) | 11:15 |
*** zul has joined #openstack-swift | 11:18 | |
*** miqui has quit IRC | 11:20 | |
*** matsuhas_ has quit IRC | 11:27 | |
blazesurfer | ok, i'm gonna get some sleep. thank you hugokuo once again for your assistance. | 11:32 |
*** sagar_ has quit IRC | 11:33 | |
*** praveenkumar has quit IRC | 11:38 | |
*** praveenkumar has joined #openstack-swift | 11:47 | |
*** Guest27087 has quit IRC | 11:48 | |
*** mkollaro has joined #openstack-swift | 12:01 | |
*** ppai has quit IRC | 12:02 | |
*** ppai has joined #openstack-swift | 12:02 | |
*** nacim has quit IRC | 12:07 | |
*** PradeepChandani has quit IRC | 12:11 | |
*** acoles is now known as acoles_away | 12:16 | |
*** Midnightmyth has quit IRC | 12:17 | |
*** nacim has joined #openstack-swift | 12:23 | |
*** nshaikh has quit IRC | 12:26 | |
*** krtaylor has left #openstack-swift | 12:27 | |
cschwede_ | acoles: yes, thanks a lot for the unittest - works as expected and coverage marks the change in swiftclient/client.py as tested. thanks! | 12:28 |
*** nosnos has quit IRC | 12:31 | |
*** dmorita has quit IRC | 12:34 | |
*** acoles_away is now known as acoles | 12:37 | |
*** lpabon has joined #openstack-swift | 12:51 | |
*** wklely is now known as wkelly | 12:55 | |
*** nshaikh has joined #openstack-swift | 12:56 | |
*** miqui has joined #openstack-swift | 12:57 | |
*** chuck__ has joined #openstack-swift | 12:58 | |
*** Rikkol has joined #openstack-swift | 13:06 | |
*** nacim has quit IRC | 13:11 | |
*** Rikkol has left #openstack-swift | 13:11 | |
*** lpabon has quit IRC | 13:24 | |
*** r-daneel has joined #openstack-swift | 13:38 | |
*** praveenkumar has quit IRC | 13:38 | |
*** nacim has joined #openstack-swift | 13:41 | |
*** gustavo has joined #openstack-swift | 13:47 | |
*** nshaikh has quit IRC | 13:54 | |
*** praveenkumar has joined #openstack-swift | 13:57 | |
*** ppai has quit IRC | 13:58 | |
*** elambert has quit IRC | 13:58 | |
*** Trixboxer has joined #openstack-swift | 14:00 | |
*** haomaiwang has joined #openstack-swift | 14:02 | |
*** psharma has quit IRC | 14:02 | |
*** jamie_h has quit IRC | 14:11 | |
*** jamie_h has joined #openstack-swift | 14:12 | |
*** byeager has joined #openstack-swift | 14:14 | |
notmyname | good morning | 14:17 |
*** csd has joined #openstack-swift | 14:21 | |
hugokuo | morning | 14:23 |
*** csd has quit IRC | 14:31 | |
byeager | Good Morning! | 14:33 |
*** haomaiwang has quit IRC | 14:34 | |
notmyname | byeager: I read an article a few days ago on how hadoop/MapReduce has had so many problems in HPC. did you see it? | 14:40 |
notmyname | byeager: made me think of the zeroVM GTM and what y'all are trying to do | 14:40 |
notmyname | byeager: http://glennklockwood.blogspot.com/2014/05/hadoops-uncomfortable-fit-in-hpc.html | 14:41 |
byeager | I had not seen that article, thanks! | 14:42 |
notmyname | byeager: did you see what's going on with storage policies? I want to make sure you know so you aren't caught unawares | 14:43 |
notmyname | byeager: rough draft of what I'll send out to various mailing lists, but it has the basic plan: https://gist.githubusercontent.com/notmyname/7521817bd1027adc35a7/raw/609164665ec6c9ccdb0ee90a69f045df4081ca0a/gistfile1.txt | 14:44 |
byeager | notmyname: Is there new information since Thursday of last week? that's the last real update I had seen. | 14:44 |
notmyname | byeager: no. just working on that plan now :-) | 14:45 |
byeager | Perfect, I will take a look at the link. | 14:45 |
gholt | Woah, wait, byeager in channel? (Of course, what am I talking about, I'm seldom really here) | 14:54 |
byeager | gholt: you have to watch what you say about me now ;) | 14:56 |
gholt | Yeah... Heheh | 14:56 |
*** mlipchuk has quit IRC | 15:03 | |
*** ryao_ has quit IRC | 15:03 | |
*** ryao_ has joined #openstack-swift | 15:03 | |
*** ryao_ is now known as ryao | 15:03 | |
* creiht sighs | 15:05 | |
*** igor_ has quit IRC | 15:07 | |
creiht | maybe it would be better for me to just unsubscribe to openstack-dev | 15:07 |
*** igor has joined #openstack-swift | 15:07 | |
notmyname | creiht: what? you don't want to have standardized variable names? | 15:08 |
creiht | lol | 15:08 |
creiht | you guess well :) | 15:09 |
*** kevinc_ has joined #openstack-swift | 15:09 | |
notmyname | ya, I read that, signed, and decided to ignore it. you should do the same. if someone actually submits a patch, then we'll say "no". if they don't then it doesn't waste anyone's time | 15:09 |
notmyname | :-) | 15:09 |
creiht | heh yeah | 15:10 |
openstackgerrit | John Dickinson proposed a change to openstack/swift: Add Storage Policy Documentation https://review.openstack.org/85824 | 15:11 |
*** igor has quit IRC | 15:12 | |
notmyname | peluse_: (I know you're on vacation) ^^ I fixed line endings and two tiny formatting typos. I'll go ahead and merge it to feature/ec. | 15:13 |
notmyname | my oldest has a kindergarten graduation this morning. I'll be back online later today | 15:17 |
gholt | Just saw the StackStack party photos. Man, I am starting to look old. ;) Was a fun party though, and after party. | 15:17 |
creiht | hehe | 15:18 |
portante | you are young at heart, gholt | 15:18 |
portante | that is all that matters | 15:18 |
portante | sort of | 15:18 |
portante | ;) | 15:18 |
openstackgerrit | A change was merged to openstack/python-swiftclient: Fix Python3 bugs https://review.openstack.org/94347 | 15:24 |
cschwede_ | notmyname: i think python-swiftclient is now ready for python3 ^^ | 15:26 |
*** igor_ has joined #openstack-swift | 15:28 | |
creiht | \o/ | 15:30 |
*** chuck__ has quit IRC | 15:32 | |
*** igor_ has quit IRC | 15:32 | |
*** igor_ has joined #openstack-swift | 15:38 | |
dmsimard | woot. | 15:38 |
*** jamie_h has quit IRC | 15:38 | |
*** igor__ has joined #openstack-swift | 15:40 | |
*** igor_ has quit IRC | 15:43 | |
*** jamie_h has joined #openstack-swift | 15:43 | |
*** igor__ has quit IRC | 15:44 | |
*** kevinc_ has quit IRC | 15:45 | |
*** kevinc_ has joined #openstack-swift | 15:50 | |
*** pberis has joined #openstack-swift | 16:00 | |
*** BAKfr has quit IRC | 16:04 | |
*** byeager has quit IRC | 16:08 | |
*** mwstorer has joined #openstack-swift | 16:08 | |
openstackgerrit | Alistair Coles proposed a change to openstack/python-swiftclient: Fix wrong assertions in unit tests https://review.openstack.org/94920 | 16:09 |
*** byeager has joined #openstack-swift | 16:11 | |
*** gyee has joined #openstack-swift | 16:17 | |
*** kenhui has joined #openstack-swift | 16:28 | |
*** byeager has quit IRC | 16:28 | |
*** byeager has joined #openstack-swift | 16:29 | |
*** zaitcev has joined #openstack-swift | 16:32 | |
*** ChanServ sets mode: +v zaitcev | 16:32 | |
clayg | gholt: you look "distinguished" like a principal engineer - or a "fellow" even. | 16:34 |
*** elambert has joined #openstack-swift | 16:34 | |
clayg | gholt: creiht: who's byeager? is that nick? | 16:34 |
gholt | Blake Yeager, ZeroVM | 16:34 |
clayg | oh oh oh - byeager - hi blake! | 16:34 |
clayg | somehow I didn't parse it as b yeager :P | 16:35 |
clayg | the "bye" part bound too hard in my head - oops | 16:35 |
gholt | Heheh | 16:35 |
clayg | dfg: so i'm testing the xlo auth bug with keystone - and by default it's not exactly the same problem | 16:36 |
byeager | clayg: lol | 16:36 |
zaitcev | now that you said it I cannot unsee it | 16:36 |
clayg | at least I can't replicate it the same way, because the signed tokens can't be invalidated just by restarting memcache - yay tempauth! | 16:36 |
clayg | i guess I'll need to find some way to lower the default ttl on a token and then time my request.... juuuuuuuuuust right | 16:37 |
clayg | maybe if i stick out my tongue | 16:37 |
zaitcev | don't bite it if it works | 16:37 |
*** nottrobin is now known as cHilDPROdigY1337 | 16:38 | |
*** nacim has quit IRC | 16:39 | |
*** jergerber has joined #openstack-swift | 16:39 | |
*** igor__ has joined #openstack-swift | 16:40 | |
*** cHilDPROdigY1337 is now known as nottrobin | 16:41 | |
dfg | clayg: ok cool- i was just trying it out. but now i can stop :p | 16:45 |
*** igor__ has quit IRC | 16:45 | |
dfg | clayg: you don't think that relying on the auth middleware to act the way tempauth/swauth (the gholt school of auth middleware) work is a little risky? | 16:46 |
dfg | clayg: from looking at keystone code it looks like if there is no token in headers and the delay_auth_decision isn't set (which is would be right?) then it raises a raise InvalidUserToken('Unable to find token in headers') | 16:54 |
dfg | that was supposed to say (which it wouldn't be right?) | 16:55 |
dfg | you'd think i'd get better at typing but i just get worse... | 16:55 |
*** shri has joined #openstack-swift | 16:57 | |
clayg | dfg: well, first of all my patch doesn't work with keystone - so the make_pre_authed request thing has that going for it | 16:58 |
clayg | dfg: but that's because of all the crazy stuff that authtoken does to "clean" the environment, because all the stuff gets passed through as headers instead of the wsgi environ because like... idk reverse proxy or something | 16:59 |
clayg | anyway i just feel like the general solution requires auth change either way so it's reasonable to spend a bit of time looking at what's going to make the most sense in keystone - maybe the special cache flag for caching the get_groups/remote_user looking business will work out - i'm still poking at it | 17:00 |
dfg | isn't there somebody who actually uses keystone in here? why don't we ask them. cause i sure don't know what i'm talking about... | 17:00 |
clayg | i use keystone more or less | 17:01 |
clayg | well... more less than more I suppose | 17:01 |
dfg | clayg: i agree with that last thing you said. that patch i put was mostly like a: this will work for now- here's how auth can adapt to it- hopefully. but there's no reason (except for the whole production thing) that we can't involve them at this point | 17:02 |
clayg | dfg: well did you guys go ahead and push something out or are you holding out for this patch to land? | 17:02 |
dfg | cause this is a bug that happens to us on a daily basis. sucks when you're hours into downloading a big file and you get cut off because your token expired | 17:03 |
dfg | (which is how we found this bug) | 17:03 |
clayg | oh man :\ | 17:03 |
clayg | is the token ttl still 24 hours over there? | 17:03 |
dfg | ya | 17:03 |
clayg | i guess with enough requests it's gunna happen | 17:03 |
clayg | well fuuu | 17:04 |
*** eglynn has quit IRC | 17:07 | |
clayg | dfg: so anyway delay_auth_decision has to be set to true for acl's to work - so even though it's not the default, I think that's how you configure the auth_token middleware for swift (that's what devstack does anyway) | 17:07 |
dfg | oh- that's a conf thing. i thought it was that env variable | 17:09 |
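The conf setting being discussed, roughly (a sketch of a keystone-era proxy config; the filter_factory path matches the python-keystoneclient of the time, other authtoken options elided):

```sh
cat >> /etc/swift/proxy-server.conf <<'EOF'
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
delay_auth_decision = true
EOF
```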
clayg | the real problem is the _remove_auth_headers, which is stripping out all the stuff you might want to cache about authn | 17:09 |
dfg | whoa- ya i see that now. that's not going to help :) but its a perfect place to add my little env variable to not do that if its set :) | 17:11 |
dfg | but it probably is against some auth doctrine or something | 17:12 |
openstackgerrit | A change was merged to openstack/swift: Add Storage Policy Documentation https://review.openstack.org/85824 | 17:15 |
dfg | clayg: its almost like they didn't think about having a request bounce back and forth up and down the pipeline a million times. talk about lack of foresight :p | 17:17 |
*** byeager has quit IRC | 17:18 | |
clayg | lol | 17:19 |
*** kenhui has quit IRC | 17:19 | |
*** acoles is now known as acoles_away | 17:22 | |
*** eglynn has joined #openstack-swift | 17:39 | |
notmyname | /back | 17:41 |
*** cds has joined #openstack-swift | 17:41 | |
*** igor_ has joined #openstack-swift | 17:42 | |
notmyname | clayg: I marked the docs patch as approved for the feature/ec branch | 17:42 |
notmyname | clayg: what else are you needing from me/us for getting the patch chain ready? | 17:42 |
*** kevinc_ has quit IRC | 17:43 | |
*** igor_ has quit IRC | 17:46 | |
*** byeager has joined #openstack-swift | 17:49 | |
*** gyee has quit IRC | 17:53 | |
*** kevinc_ has joined #openstack-swift | 18:00 | |
clayg | dfg: ok, so the cache thing doesn't really work if common.wsgi.make_env that's used by make_subrequest doesn't copy all the environ keys you need to cache identity - which for keystone's keys - it does not | 18:00 |
dfg | ah | 18:01 |
clayg | but we *can* fix that - but I think we're exposing a leak in the abstraction :\ | 18:01 |
dfg | i hate those make_env functions. sam had it before where it would copy everything over and we told him not to do that for some reason. but that would have fixed this | 18:02 |
clayg | dfg: so... fuck it? swift was a bad idea anyway? | 18:03 |
*** byeager has quit IRC | 18:03 | |
dfg | haha | 18:04 |
clayg | dfg: unrelated - are you going to be in CO for the hack-a-thon | 18:04 |
dfg | no- we're going on vacation. | 18:05 |
clayg | damnit | 18:05 |
dfg | ya- how is it going to function without me there? :p | 18:05 |
clayg | i often find myself thinking "man... that dfg sure is a badass - I should buy him some whiskey" | 18:05 |
dfg | now you're talkin my language | 18:06 |
creiht | lol | 18:06 |
clayg | but i'm not going to mail it to you - you can't avoid me forever | 18:06 |
dfg | ya- i would have gone. the one in austin worked out pretty well i thought | 18:06 |
clayg | ok, sorry i'm getting all down in the weeds on the xlo auth thing - but I'm going to need to take this new insight and stew on it a little longer | 18:07 |
clayg | maybe I can just give up on my fear of make_pre_authed request | 18:07 |
dfg | ya- its a def pain in ass. | 18:08 |
notmyname | gholt: do you want me to respond to that email or just FYI until he pops up again? | 18:11 |
*** jamie_h has quit IRC | 18:12 | |
gholt | notmyname: Completely FYI, just so you had a bit to go on if he pops up again. | 18:16 |
notmyname | gholt: ok, cool. | 18:16 |
*** byeager has joined #openstack-swift | 18:17 | |
portante | are you folks talking about the "Uniform name for logger in projects" email? | 18:17 |
notmyname | portante: no. someone had been emailing gholt about a deployment question, and he bcc'd me on it | 18:17 |
*** byeager has quit IRC | 18:18 | |
portante | ah, k | 18:18 |
notmyname | portante: we already griped about that other email this morning :-) | 18:18 |
portante | ah, I missed it! | 18:18 |
notmyname | portante: IMO it's a distraction and should be ignored. I don't think you (or anyone else contributing to swift) should be spending any time on it | 18:19 |
*** byeager has joined #openstack-swift | 18:19 | |
portante | sure | 18:19 |
portante | I'll spend time on it if you ask! :) | 18:20 |
portante | that would be nice easy thoughtless work | 18:20 |
notmyname | portante: you need to be spending time on the thoughtful work that is actually useful to deployers and users :-) | 18:21 |
portante | just tell me what to do then! | 18:21 |
portante | besides jump in a lake | 18:22 |
*** kevinc_ has quit IRC | 18:28 | |
*** byeager_ has joined #openstack-swift | 18:32 | |
*** byeager has quit IRC | 18:33 | |
notmyname | portante: if you're looking for stuff beyond reviews, there were a few interesting ideas out of the summit: affinity on replication (and container listing updates), "parent id" on logs, /healthcheck?deep=true, swift-recon ring validator | 18:33 |
notmyname | all more important than `s/logger/LOG/` | 18:34 |
*** gustavo has quit IRC | 18:35 | |
clayg | dfg: the rabbit hole just keeps getting deeper. some of the stuff that it seems a pre-authn'd request would need to cache is actually seemingly not being preserved in the wsgi environ outside of headers (which always get stripped passing through the keystoneclient.authtoken thing) | 18:35 |
clayg | chmouel: acoles_away: why on *earth* doesn't _integral_keystone_identity just return environ['keystone.identity'] | 18:38 |
clayg | oh god, i guess that key is missing user_id because... "backwards compat"? | 18:40 |
*** eglynn has quit IRC | 18:40 | |
*** igor_ has joined #openstack-swift | 18:42 | |
zaitcev | portante: I'm drowning in TODOs here too. 1) Delete the current /info auth and apply normal auth while we still can 2) bz#1083039 - double-logging, 3) pick https://review.openstack.org/77812 from Clay, 4) PyECLib packaging in Fedora, 5) do something about xattr.so and xattr>=4.0: create a fake egg (re 1020449) | 18:46 |
*** igor_ has quit IRC | 18:47 | |
zaitcev | portante: Also... 6) FIXME in Swift - left out by Portante [-- that one apparently inspired by swift/obj/mem_server.py] | 18:48 |
zaitcev | test/unit/obj/test_diskfile.py: # FIXME - yes, this an icky way to get code coverage ... worth | 18:48 |
zaitcev | swift/common/middleware/x_profile/profile_model.py: # FIXME: eventlet profiler don't provide full list of | 18:48 |
dfg | clayg: what a drag. i'll try to talk to some keystone devs at RAX about it. | 19:00 |
dfg | we haven't irritated them recently with an openstack-dev email thread, have we? ok good. | 19:01 |
*** lpabon has joined #openstack-swift | 19:03 | |
portante | notmyname: do you have those captured in a wiki page or something? | 19:08 |
notmyname | portante: gleaned from the etherpads last week. notes in my own evernote. briefly talked about in yesterday's meeting | 19:08 |
portante | zaitcev: I feel your pain, what is #6 about? | 19:09 |
notmyname | portante: I was hoping to get some of them recorded as specs when that repo gets all set up | 19:09 |
zaitcev | portante: I just saw some left over in some patches | 19:09 |
portante | zaitcev: if you worked for the NFL, everything would be fine | 19:09 |
Dieterbe | hey notmyname , have you seen https://vimeo.com/95076197 yet? after about 11min i show a bunch of examples of generating swift metrics dashboards dynamically | 19:13 |
notmyname | Dieterbe: I have not. | 19:13 |
*** kevinc___ has joined #openstack-swift | 19:14 | |
notmyname | thanks for the link | 19:14 |
notmyname | Dieterbe: are you on twitter? (ie for when I tweet this) | 19:15 |
Dieterbe | notmyname: https://twitter.com/Dieter_be | 19:18 |
notmyname | Dieterbe: thanks | 19:18 |
notmyname | Dieterbe: did you really just say "metrics 2.0"? ;-) | 19:19 |
*** eglynn has joined #openstack-swift | 19:20 | |
notmyname | oh, it's actually a thing http://metrics20.org | 19:22 |
Dieterbe | yeah it's actually what the presentation is about | 19:23 |
Dieterbe | and then i demo some examples of how to leverage it, but a lot of it is swift related, so that's why i shared it with you | 19:23 |
*** eglynn has quit IRC | 19:32 | |
notmyname | Dieterbe: pretty cool. thanks for sharing | 19:34 |
*** serverascode has quit IRC | 19:35 | |
notmyname | Dieterbe: is this (metrics 2.0, ie structured metrics) something that we should be looking at in swift? would this be a new adaptor that emits these sort of metrics natively? | 19:36 |
*** serverascode has joined #openstack-swift | 19:37 | |
Dieterbe | notmyname: well, right now, I'm not aware of anyone/anything else adopting it. so sure you could try, but a bunch of people might consider it too exotic at this point. | 19:38 |
Dieterbe | notmyname: basically it's still statsd metrics, but in a different format | 19:39 |
Dieterbe | a format that might look weird to a bunch of people | 19:39 |
Dieterbe | also, there's https://github.com/vimeo/graph-explorer/blob/master/graph_explorer/structured_metrics/plugins/openstack_swift.py which upgrades some of the existing statsd metrics to their 2.0 counterpart | 19:40 |
*** eglynn has joined #openstack-swift | 19:43 | |
*** igor_ has joined #openstack-swift | 19:43 | |
openstackgerrit | gholt proposed a change to openstack/swift: New log_max_line_length option. https://review.openstack.org/94991 | 19:43 |
gholt | creiht: notmyname: https://review.openstack.org/#/c/94991/ | 19:45 |
gholt | ^ That's the bug that hits us in at least one of our production clusters, silently making some object-replicators go brain dead. | 19:45 |
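Once the patch lands, using the new option would look something like this (a sketch; the config section, value, and which server configs accept it are assumptions, see the review for the real docs):

```sh
cat >> /etc/swift/object-server.conf <<'EOF'
[DEFAULT]
# truncate any log line longer than this many characters
log_max_line_length = 8192
EOF
```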
*** eglynn has quit IRC | 19:47 | |
*** igor_ has quit IRC | 19:47 | |
creiht | gholt: cool, I'll take a look | 19:49 |
notmyname | gholt: lgtm, but needs docs | 19:51 |
notmyname | Dieterbe: was the question at the end of that talk about ceilometer stuff? | 19:52 |
notmyname | Dieterbe: from the video I got "something, something, openstack maybe does this" | 19:53 |
*** gyee has joined #openstack-swift | 19:55 | |
Dieterbe | notmyname: ah yes, someone referred to some kind of metric naming specification in openstack/ceilometer | 19:55 |
Dieterbe | notmyname: but i looked for it, and couldn't really find anything similar to metrics 2.0 | 19:55 |
notmyname | Dieterbe: ok. and "metrics 2.0" seems a little more than just a naming specification | 19:55 |
notmyname | ie it's more of a structured data packed rather than just a key/value pair with an interesting name | 19:56 |
notmyname | right? | 19:56 |
notmyname | *packet | 19:56 |
gholt | notmyname: Oh yeah, forgot docs. Can I just put it at the end of the deployment_guide.rst under the Logging Considerations section? Putting it everywhere it could be is getting a bit cumbersome, both to keep up to date and to read. | 19:56 |
notmyname | gholt: I'd like it at least to be on the deployment guide and on the logs page. | 19:57 |
gholt | There's a logs page? Heh | 19:58 |
Dieterbe | notmyname: it's 2 sets of key-value pairs. 1 set to identify the metric, the other is metadata that can change without changing the metric identity | 19:58 |
notmyname | gholt: ya, it's quite nice. http://docs.openstack.org/developer/swift/logs.html | 19:58 |
notmyname | Dieterbe: how are they tied together then? | 19:58 |
notmyname | gholt: seems no worse to keep up to date than the default log_level setting that's "everywhere" :-) | 20:00 |
*** kevinc___ has quit IRC | 20:00 | |
gholt | Yes, I believe that's my complaint. :) | 20:01 |
*** byeager_ has quit IRC | 20:01 | |
gholt | But either way, no biggie. There's stuff that's already missing, but I guess that doesn't make having more missing right. | 20:01 |
notmyname | gholt: I hear you. I think fifieldt's complaint from yesterday (or at the summit) that we aren't keeping our config options up to date in our docs is still ringing in my ears, though | 20:03 |
*** erlon has quit IRC | 20:04 | |
notmyname | gholt: that is, some people say we should use oslo config because that will automatically keep our docs up to date | 20:04 |
Dieterbe | notmyname: in the wire protocol, it can be as simple as "service=openstack_swift what=load_time unit=ms env=prod 123 1234567890" | 20:04 |
gholt | Yeah, it might be because it's hard to do right now or something. It'd be nice if we can figure out some way to doc it in one place and have that propagate to all the places. | 20:04 |
*** erlon has joined #openstack-swift | 20:04 | |
gholt | If that's oslo config, fine, as long as it doesn't mean adding 500 useless dependencies along with it. | 20:04 |
notmyname | gholt: https://imgflip.com/i/90ddd | 20:04 |
notmyname | gholt: ya. I'm all for writing it down in one place | 20:05 |
creiht | lol | 20:05 |
gholt | Apparently we have something called log_custom_handlers, though I have no idea what those do. | 20:06 |
notmyname | gholt: pandemicsyn wrote it for sentry or something | 20:06 |
dfg | clayg: put a comment on that ticket- https://review.openstack.org/#/c/92165/ what do you think? | 20:07 |
notmyname | Dieterbe: hmm..interesting. in that case it would simply be an update to the log handler that's currently writing statsd messages. but we'd also have to update the calls to include the units and other metadata. or register the metrics somewhere? that sounds complicated | 20:07 |
notmyname | Dieterbe: and at a higher level, now I know about http://monitorama.com. And next year pandemicsyn and swifterdarrell should both go :-) | 20:08 |
notmyname | Dieterbe: have you looked at any of the stuff datadog is doing around monitoring? | 20:08 |
gholt | notmyname: There's really no place for config values in that logs.rst -- where would you like me to stick it? :) | 20:09 |
*** cds has quit IRC | 20:10 | |
Dieterbe | notmyname: oh sure | 20:11 |
notmyname | gholt: heh. IMO that page is for people who need to parse swift logs. so maybe a sentence/paragraph like "Long log lines may be truncated if `log_max_line_length` is set. A truncated line keeps the first and last portions, with dots in the middle" | 20:11 |
Dieterbe | notmyname: what about them? | 20:11 |
gholt | notmyname: Oh okay, cool | 20:11 |
dfg | clayg: wait a sec- do the SLOs need to be to the left of auth when building the manifest? no, right? i'll try some stuff out. maybe this isn't so bad | 20:12 |
notmyname | Dieterbe: I saw them at the Red Hat summit. seemed to have an interesting tool set (especially as someone working at a company that shows a lot of metrics). if you had seen them, I'm curious about your take on their product | 20:12 |
*** byeager has joined #openstack-swift | 20:13 | |
Dieterbe | notmyname: yeah they have a really cool product/service, but i'm fundamentally against proprietary hosted monitoring | 20:13 |
notmyname | Dieterbe: yes I know :-) | 20:13 |
creiht | hah | 20:13 |
notmyname | Dieterbe: since you are online and chatty.... ;-) | 20:14 |
notmyname | Dieterbe: did you ever get more hardware to take care of your networking problem in your swift cluster? | 20:14 |
pandemicsyn | notmyname: https://twitter.com/obfuscurity/status/466036060401979392 | 20:15 |
Dieterbe | notmyname: i can't count anymore how many times i've chatted with alexis :) i like em and i hope their business will go great, but i'm all about open source :) | 20:15 |
pandemicsyn | probably one of the best talks i've seen | 20:15 |
Dieterbe | notmyname: yeah, it's all 10Gbps now, and we run a proxy server on every storage node | 20:16 |
notmyname | Dieterbe: ah, cool | 20:22 |
notmyname | *sigh* haters gonna hate | 20:22 |
pandemicsyn | "true dat!" | 20:22 |
notmyname | (*grumble*grumble*twitter) | 20:22 |
notmyname | pandemicsyn: people are wrong on the Internet, and I don't know how to fix that! | 20:22 |
pandemicsyn | i do, but that story ends with people calling me Emperor PandemicSyn | 20:24 |
*** igor_ has joined #openstack-swift | 20:25 | |
*** miqui has quit IRC | 20:26 | |
*** igor__ has joined #openstack-swift | 20:27 | |
*** Trixboxer has quit IRC | 20:28 | |
*** kenhui has joined #openstack-swift | 20:28 | |
*** foexle has quit IRC | 20:29 | |
*** igor_ has quit IRC | 20:30 | |
*** igor__ has quit IRC | 20:32 | |
openstackgerrit | gholt proposed a change to openstack/swift: New log_max_line_length option. https://review.openstack.org/94991 | 20:36 |
gholt | creiht: notmyname: ^ :) | 20:36 |
* clayg recently upped his $MaxMessageSize in rsyslog.conf | 20:38 | |
gholt | Heheh, UDP is what got us. Amazingly. We all thought the kernel would auto-frag, but apparently not. | 20:40 |
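The rsyslog tuning clayg mentions, roughly (a sketch; the value is illustrative, and rsyslog's docs say the directive must precede the network input module loads, hence prepending):

```sh
sed -i '1i $MaxMessageSize 64k' /etc/rsyslog.conf  # prepend so it comes first
service rsyslog restart
```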
portante | gholt: < 7? | 20:46 |
gholt | Heh, do I need to document that? ;) "1 ... 7" :) | 20:47 |
portante | why bother? | 20:47 |
*** eglynn has joined #openstack-swift | 20:47 | |
gholt | That was really just joking, btw. I'm completely fine with the doc additions. :D | 20:47 |
portante | why not like < 80 or something | 20:47 |
portante | ;) | 20:48 |
gholt | I don't really have an answer for that. It's what I picked and I wrote the code, heheh. | 20:48 |
portante | okay, sure | 20:48 |
notmyname | gholt: thanks. looks good. let me actually run tests, then I'll +2 | 20:58 |
*** blazesurfer has quit IRC | 21:01 | |
*** lpabon has quit IRC | 21:10 | |
*** kenhui has quit IRC | 21:10 | |
mkollaro | is it possible to set the number of handoff nodes swift should use? | 21:16 |
notmyname | mkollaro: there isn't a limit to the number swift uses. it can use up to all the other drives in the cluster. but you can configure how many it looks at in response to a request | 21:17 |
notmyname | mkollaro: by default it looks at 2*<replica count> (ie 6 for 3 replicas) | 21:17 |
mkollaro | notmyname: oh, cool | 21:17 |
mkollaro | notmyname: I should probably not ask for any documentation, right? :D | 21:18 |
notmyname | mkollaro: looking for it now | 21:18 |
notmyname | mkollaro: it's in the sample config https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L163 and the docs http://docs.openstack.org/developer/swift/deployment_guide.html#proxy-server-configuration | 21:19 |
notmyname | ta da! | 21:19 |
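The option notmyname links to is request_node_count in the proxy config; roughly (a sketch mirroring the sample config's default form):

```sh
cat >> /etc/swift/proxy-server.conf <<'EOF'
[app:proxy-server]
# how many nodes (primaries plus handoffs) to try per request;
# either an integer or a multiple of the replica count
request_node_count = 2 * replicas
EOF
```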
dfg | clayg: you there? | 21:19 |
*** kevinc___ has joined #openstack-swift | 21:21 | |
mkollaro | notmyname: awesome | 21:23 |
mkollaro | thanks a lot | 21:23 |
mkollaro | the state of swift documentation is quite good | 21:24 |
notmyname | mkollaro: thanks. we've worked hard at it (and still try to keep it mostly up to date) | 21:24 |
clayg | dfg: no | 21:24 |
clayg | gholt: I don't get why 7 is magic for doing the either split or truncate dance? | 21:24 |
clayg | who would run with < 7 anyway? | 21:25 |
mkollaro | notmyname: :) | 21:25 |
notmyname | clayg: who would run for less than about 50? | 21:25 |
clayg | notmyname: +1 | 21:25 |
clayg | or idk, 1500/mtu :P | 21:25 |
mkollaro | notmyname: so, it should be possible to damage disks one by one until there is only one left and it would hold all the data, right? assuming it has enough capacity | 21:25 |
notmyname | :-) | 21:25 |
dfg | clayg: i think that if we just move slo and dlo to the right of auth, with no code change, there's no bug. | 21:26 |
clayg | dfg: so don't do authz for subrequests? | 21:26 |
notmyname | mkollaro: correct. (but you'd need at least 2 disks to still be able to write new data so that you have quorum in a 3-replica cluster) | 21:26 |
dfg | clayg: it's like i decided that slo has to be to the left of auth a long time ago, and it actually doesn't | 21:26 |
dfg | clayg: the authorize call does the authz, so it still gets called | 21:27 |
clayg | dfg: i don't think that's true | 21:27 |
dfg | i just tried it | 21:27 |
notmyname | mkollaro: back in Hong Kong, clayg showed a demo of a 6 node cluster that had zero client impact when he had 4 of the 6 drives offline | 21:27 |
clayg | dfg: proxy.server pops that bad boy out every request - tempauth adds it back in but keystone didn't | 21:27 |
clayg | dfg: what did you try exactly? | 21:28 |
*** igor_ has joined #openstack-swift | 21:28 | |
mkollaro | notmyname: so writing new data wouldn't work, but the old data would still be on that last single node with only a single replica, is that correct? | 21:28 |
dfg | on master, i moved slo and dlo right to the left of the proxy-server in the pipeline. then everything works | 21:28 |
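dfg's reordering, roughly (a sketch; the middleware set is illustrative, the point is slo/dlo sitting to the right of auth, just before proxy-server):

```sh
# proxy-server.conf, before and after the reorder (illustrative pipelines):
#   pipeline = catch_errors cache slo dlo tempauth proxy-server
#   pipeline = catch_errors cache tempauth slo dlo proxy-server
```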
notmyname | mkollaro: correct | 21:28 |
notmyname | mkollaro: although I'd suggest you try not to let it get to that point :-) | 21:28 |
dfg | clayg: ^^ after the memcache restart, and building an slo where a segment is in a container i don't have access to, i get a 403 on building | 21:29 |
clayg | notmyname: you might have to ptl me on https://review.openstack.org/#/c/63327/ - or maybe we can try and get consensus at the next swift meeting - but I'm apparently holding that one up | 21:29 |
mkollaro | notmyname: I wrote some tests for this stuff https://github.com/mkollaro/destroystack/blob/master/destroystack/test_swift_small_setup.py | 21:29 |
mkollaro | notmyname: but somehow my test for this case is failing...but I think it's rather something in my code | 21:29 |
notmyname | clayg: ah, thanks for pointing it out. I hadn't looked at it recently | 21:30 |
clayg | portante: also I find it surprising that you of all people would be so laissez-faire about changing swob's public interface, what with your investment in other public interfaces inside of swift? | 21:30 |
gholt | clayg: No biggie on the 7 thing if you guys want to change it. I just arbitrarily picked it. | 21:31 |
dfg | clayg: and trying to read a SLO where a segment is in a container you no longer have read access to. it all works just fine. | 21:31 |
portante | clayg: man! | 21:31 |
portante | you are right, but ... what are we talking about? | 21:32 |
portante | ;) | 21:32 |
clayg | as in you can't read the slo if the container removes and acl?! how could that... | 21:32 |
dfg | clayg: yes | 21:32 |
*** igor_ has quit IRC | 21:32 | |
notmyname | gholt: clayg: I care soooo much about the 7 in that patch...that I went ahead and approved it as-is | 21:32 |
clayg | portante: the return value of the range thing for swob that torgomatic was trying to clean up or whatever... https://review.openstack.org/#/c/63327 | 21:32 |
clayg | heh | 21:32 |
portante | clayg: thanks, checking | 21:33 |
portante | hmm | 21:34 |
portante | caught | 21:34 |
portante | duplicity | 21:34 |
portante | two-faced | 21:34 |
portante | harvey dent | 21:34 |
clayg | we're all entitled to be a hypocrite whenever it suits us - don't let anyone make you feel bad about it | 21:34 |
portante | I'll just flip a coin | 21:35 |
dfg | clayg: it goes through tempauth, sets up the user shit, then the authorize uses that user for each of the containers. | 21:35 |
clayg | lol | 21:35 |
portante | ;) | 21:35 |
dfg | pretty sure keystone would do same thing | 21:35 |
clayg | dfg: but what sets the authorize callback on the subsequent requests? | 21:35 |
portante | clayg, torgomatic: arguably a new method should be created on swob that does the sane thing, the old method's behavior preserved, and then the old one removed from the core swift code in favor of the new | 21:36 |
dfg | clayg: swift.authorize is copied over in make_subrequest | 21:36 |
clayg | and it somehow managed to do that before it's called - i think i see now | 21:37 |
clayg | i love it! | 21:37 |
dfg | ya | 21:37 |
dfg | wait what? | 21:37 |
clayg | dfg: I can't explain love... it's just something you feel. | 21:38 |
dfg | ... | 21:38 |
dfg | are you sure you can't mail that whiskey? | 21:38 |
creiht | lol | 21:38 |
* clayg moves on | 21:38 | |
clayg | dfg: so you think we can close the bug with a doc fix for the pipeline re-order? | 21:39 |
dfg | clayg: yes | 21:39 |
clayg | dfg: great! | 21:40 |
dfg | ya. | 21:40 |
dfg | i'm not going to go into why this is really f-ing annoying. but it is... | 21:41 |
dfg | oh well. | 21:41 |
portante | what a dance | 21:41 |
portante | masterful | 21:41 |
portante | gotta go | 21:42 |
dfg | ? | 21:42 |
clayg | dfg: why not? sometimes it helps to vent. also I think the pipeline approach is gunna work great | 21:47 |
clayg | oh hrmm.... make_env may still need to be updated to copy over keystone.identity | 21:48 |
dfg | clayg: that sounds likely. | 21:49 |
dfg | clayg: i just had to jump through some hoops to get all this stuff working with sos, and it turns out that i was just wrong about slo having to be before auth. anyway- i'll try it out to be sure, but I think that's right. | 21:50 |
dfg | so its not a huge deal either way. | 21:50 |
clayg | hrmm... that doesn't sound like you're all that annoyed - I think you're repressing | 21:51 |
dfg | :) | 21:52 |
gholt | depressing, maybe | 21:58 |
dfg | its alright- at least i didn't have to sit through a karaoke competition today :p | 22:02 |
clayg | acoles_away: chmouel: dfg: I needed all of this https://gist.github.com/clayg/8378436d6772ac6fae3c to fix https://bugs.launchpad.net/swift/+bug/1315133 | 22:03 |
dfg | clayg: lgtm | 22:04 |
clayg | dfg: the other fix might be to fix make_env to be more aggressive about copying over the whole env | 22:05 |
clayg | dfg: if it wasn't for the fact that keystone uses 1000 headers to build up keystone.identity, we could avoid the change to keystone_auth just by copying over all the headers for the subrequests | 22:06 |
clayg | i mean we could whitelist all of those keystone headers that need to be in the env for the subsequent call in authorize to work as is - but that seems stupid | 22:07 |
dfg | clayg: ya- that's how torgomatic wrote it, but me and gholt were talking about it and how we liked knowing what we were copying. but I'm kinda thinking that copying over the whole thing may have been better | 22:07 |
clayg | so adding full-on re-use of the cached keystone.identity seems like the other option, if we need make_env to stay as a whitelist | 22:07 |
clayg | ok great, so we can just fix the pipeline and that fixes it everywhere, as long as we change either keystone or common.wsgi | 22:08 |
dfg | clayg: ya i think so | 22:09 |
clayg | weeee | 22:11 |
*** igor_ has joined #openstack-swift | 22:28 | |
*** jergerber has quit IRC | 22:32 | |
*** igor_ has quit IRC | 22:33 | |
*** kevinc___ has quit IRC | 22:38 | |
*** kevinc___ has joined #openstack-swift | 22:43 | |
*** byeager has quit IRC | 22:53 | |
*** ZBhatti_ has joined #openstack-swift | 23:02 | |
*** mkollaro has quit IRC | 23:09 | |
*** openstackgerrit has quit IRC | 23:19 | |
*** openstackgerrit has joined #openstack-swift | 23:20 | |
*** igor_ has joined #openstack-swift | 23:29 | |
*** igor_ has quit IRC | 23:34 | |
*** elambert has quit IRC | 23:39 | |
*** shakayumi has joined #openstack-swift | 23:41 | |
openstackgerrit | A change was merged to openstack/swift: New log_max_line_length option. https://review.openstack.org/94991 | 23:42 |
*** shakayumi has quit IRC | 23:50 | |
*** r-daneel has quit IRC | 23:50 | |
*** mwstorer has quit IRC | 23:55 |