*** saschpe has quit IRC | 00:00 | |
peluse | torgomatic/clayg: did you guys sync on the acct rollup question wrt whether we need header responses from the backend to include index values (as well as header values to clients including names)? | 00:00 |
*** saschpe has joined #openstack-swift | 00:05 | |
clayg | peluse: I just talked to torgomatic and he says he doesn't care any more | 00:10 |
clayg | peluse: *I* care and don't want a bunch of string mungy code responsible for getting out the index, maybe having both was "ok" - but it seemed sorta premature | 00:11 |
peluse | clayg: sounds good... thx. So I think it's good to go then (meaning nothing pending to change on the patch set) right? | 00:17 |
clayg | nothing pending for me - i've moved on to manually testing and reviewing the reconciler change | 00:19 |
*** openstack has joined #openstack-swift | 00:20 | |
*** RockKuo has joined #openstack-swift | 00:22 | |
*** matsuhashi has joined #openstack-swift | 00:26 | |
peluse | thanks clayg | 00:28 |
*** sungju has joined #openstack-swift | 00:29 | |
*** shri has quit IRC | 00:39 | |
*** d89 has joined #openstack-swift | 00:40 | |
*** tdasilva has quit IRC | 00:41 | |
*** h6w has joined #openstack-swift | 00:48 | |
h6w | Hi. I'm trying to understand the need for zones. What do zones do that storage nodes don't? | 00:49 |
notmyname | h6w: zones allow you to tell swift about your physical failure domains | 00:51 |
notmyname | h6w: and swift will attempt to place replicas of the data across different zones | 00:51 |
notmyname | eg so a single rack power issue doesn't cause durability or availability problems | 00:51 |
notmyname | or a top of rack switch | 00:52 |
h6w | So I should put all storage nodes attached to the same switch on one zone? | 00:52 |
*** yuanz has joined #openstack-swift | 00:54 | |
notmyname | h6w: yes. but there are a few tricks you'll learn from prod: more zones == better (especially when len(zones) > len(replicas), iff they are actually different failure domains); keep zones roughly equal in capacity | 00:54 |
h6w | notmyname: That sounds like a good idea. But if I have only one zone (because I only have one switch) that shouldn't stop anything from working, should it? I believe I saw you do this in your presentation at LCA2014. | 00:56 |
notmyname | h6w: no. if you have just one switch, one zone is a great idea. that sounds exactly like what you should do | 00:57 |
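For readers following along, here is a minimal sketch of the single-zone layout discussed above using swift-ring-builder; the part power, IP addresses, device names, and weights are hypothetical placeholders, not values from this conversation:

```shell
# Build an object ring with 2^10 partitions, 3 replicas, 1 hour min_part_hours
swift-ring-builder object.builder create 10 3 1

# Every device lives in zone 1, since the whole cluster hangs off one switch
swift-ring-builder object.builder add z1-10.0.0.11:6000/sdb1 100
swift-ring-builder object.builder add z1-10.0.0.12:6000/sdb1 100
swift-ring-builder object.builder add z1-10.0.0.13:6000/sdb1 100

# Assign partitions and write out the ring file
swift-ring-builder object.builder rebalance
```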
*** peluse has quit IRC | 00:57 | |
*** peluse has joined #openstack-swift | 00:57 | |
*** yuan has quit IRC | 00:57 | |
h6w | Thanks. This old bug report was confusing me! https://answers.launchpad.net/swift/+question/146048 "You are only creating 1 zone - try using at least 3 and see if that works." | 00:58 |
notmyname | h6w: ah. that's _really_ old, and we solved that issue a long time ago. originally, you were required to have at least as many zones as replicas, but that hasn't been true for quite some time now (2 years maybe?) | 01:00 |
h6w | Yeah. I saw the date. Unfortunately google doesn't. :-p | 01:00 |
notmyname | jeblair: clarkb: just looking back through some old IRC logs...why is the -infra team using swift3? | 01:01 |
notmyname | ...asked in the -infra channel | 01:03 |
notmyname | glange: are you still point for billing/utilization for cloud files? I came across something today I wanted to ask you about | 01:04 |
torgomatic | notmyname: do let me know what's up; I'm sort of curious why swift3 myself | 01:04 |
notmyname | :-) | 01:04 |
* portante is also curious | 01:07 | |
*** krtaylor has quit IRC | 01:07 | |
notmyname | that may have only been in relation to putting it on stackforge, but that just answers one question with another | 01:08 |
notmyname | but I was seeing it temporally associated with -infra logging to swift, so I may have been confused there | 01:08 |
*** RockKuo has quit IRC | 01:09 | |
notmyname | looks like swift3 is on stackforge with a ptl and -core team (all from NTT) | 01:10 |
notmyname | with no-op gate jobs, so I'm not sure what advantages they are getting | 01:10 |
*** krtaylor has joined #openstack-swift | 01:11 | |
*** tdasilva has joined #openstack-swift | 01:17 | |
*** krtaylor has quit IRC | 01:20 | |
*** tdasilva has left #openstack-swift | 01:21 | |
*** krtaylor has joined #openstack-swift | 01:23 | |
h6w | notmyname: So both the region and the zone are separate failure points? Since in your presentation you have the same zone across two regions, they can't be the same failure point, can they? | 01:23 |
notmyname | torgomatic: portante: seems that devstack references swift3 and as such there is a desire to not reference github (since that doesn't have good uptime) and instead reference the openstack git server. therefore stackforge | 01:24 |
notmyname | torgomatic: portante: I do not know what that means for where swift3 patches should go | 01:24 |
notmyname | h6w: they are nested, or tiered. IOW, regions have zones have servers have drives | 01:25 |
h6w | (Although you were running it all on one laptop, so theoretically it's all one point of failure. :-p) | 01:25 |
notmyname | h6w: just as 2 swift accounts can have an "images" container, you can set up a "zone1" in multiple regions | 01:26 |
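To illustrate the nesting notmyname describes (regions contain zones contain servers contain drives), the same zone number can be reused in different regions because zones are scoped to their region; the addresses and weights below are made up:

```shell
# "zone 1" exists independently in region 1 and in region 2
swift-ring-builder object.builder add r1z1-10.1.0.11:6000/sdb1 100
swift-ring-builder object.builder add r2z1-10.2.0.11:6000/sdb1 100
```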
portante | wow, move a project so that devstack can work? | 01:26 |
portante | really? | 01:26 |
notmyname | h6w: demoware! | 01:26 |
portante | huh | 01:26 |
portante | guess this is the new world order | 01:26 |
notmyname | portante: well, it looks like the move to devstack patch still references the github repo as the origin. I don't know if that means it's upstream or just where it used to be | 01:27 |
notmyname | mordred: ^ ? | 01:27 |
portante | if it is a pure indirection, that seems okay | 01:27 |
portante | stackforge would be a "cache" of the real project | 01:28 |
creiht | as a side note, github seems to be up more, now that openstack ci isn't pointed at it anymore | 01:28 |
* creiht runs | 01:28 | |
portante | yeah, leave us standing around here! | 01:28 |
portante | ;) | 01:28 |
*** nosnos has joined #openstack-swift | 01:29 | |
jeblair | notmyname: it just moved on friday, i don't think they have had time to set up gate jobs or move the devstack uri | 01:30 |
*** matsuhashi has quit IRC | 01:30 | |
notmyname | jeblair: is it a cache or a mirror or a new authoritative location? | 01:30 |
jeblair | notmyname: i believe they moved so that they could test devstack changes, so i expect them to take advantage of the ci | 01:30 |
mordred | notmyname: I would expect it to be a new authoitative location | 01:30 |
*** matsuhashi has joined #openstack-swift | 01:30 | |
mordred | I would NOT expect me to be able to spell though | 01:31 |
jeblair | mordred, notmyname: that is my understanding | 01:31 |
notmyname | portante: it really is a new world order ;-) | 01:31 |
notmyname | jeblair: mordred: thanks for the info :-) | 01:31 |
portante | jeblair, mordred: so it did not move so that devstack would work. k | 01:31 |
*** piousbox has joined #openstack-swift | 01:38 | |
*** jasondotstar has joined #openstack-swift | 01:41 | |
clayg | torgomatic: the x-timestamp that direct_get_oldest_storage_policy_index uses is tied to the created_at key in the container db's stat table - which doesn't get reset on container "recreate" | 01:46 |
* clayg is really hating on container recreate lately | 01:46 |
openstackgerrit | Peter Portante proposed a change to openstack/swift: Make initialization test failure more explicit https://review.openstack.org/82688 | 01:48 |
openstackgerrit | Peter Portante proposed a change to openstack/swift: Attempt to narrow race conditions in DB connect https://review.openstack.org/82689 | 01:48 |
clayg | idk, maybe the oldest x-PUT-timestamp will turn out to be correct | 01:48 |
portante | clayg: see above, I believe there are more race conditions in the db connection construction code with deleting dbs/creating dbs | 01:50 |
*** lpabon has joined #openstack-swift | 01:50 | |
portante | I might not really understand this well enough, but it looks like we'll always need to lock the parent directory to avoid them | 01:50 |
*** lpabon has quit IRC | 01:50 | |
clayg | portante: I had sorta convinced myself I care less about the races that occur around the reclaim age timeframe | 01:53 |
*** yuan has joined #openstack-swift | 01:56 | |
*** peluse has quit IRC | 01:56 | |
*** RockKuo has joined #openstack-swift | 01:57 | |
portante | clayg: what was the reasoning? | 01:57 |
*** peluse has joined #openstack-swift | 01:57 | |
*** yuanz has quit IRC | 01:58 | |
portante | and there is a race during initialization, too | 01:58 |
portante | clayg: we move the temp file under the directory lock, release the lock and then attempt to make the connection | 01:59 |
portante | so two creates will succeed, and operations will be performed against the newly deleted database | 01:59 |
portante | I think | 01:59 |
portante | I found that the above changes made it so that I could more easily follow what was happening with the usage of the db_file field and all the existence checks | 02:00 |
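A rough sketch of the narrowing portante describes: keep the rename and the first connection under the same parent-directory lock so a concurrent delete/recreate cannot slip in between them. This assumes the swift.common helpers named here have the signatures used below; the real patches are in the reviews linked above.

```python
from swift.common.db import get_db_connection
from swift.common.utils import lock_parent_directory, renamer

def finish_initialize(tmp_db_file, db_file, timeout=10):
    # Hypothetical helper: hold the parent-directory lock across both the
    # move of the temp file into place and the first connection to it.
    with lock_parent_directory(db_file, timeout):
        renamer(tmp_db_file, db_file)      # move the temp db into place
        conn = get_db_connection(db_file)  # connect while still locked
    return conn
```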
*** shri has joined #openstack-swift | 02:00 | |
*** fifieldt_ has joined #openstack-swift | 02:00 | |
*** shri has quit IRC | 02:02 | |
portante | ... or I might have lost my mind entirely ... can't seem to find anything at times between my ears, so mileage may vary ... ;) | 02:04 |
*** d89 has quit IRC | 02:06 | |
openstackgerrit | Peter Portante proposed a change to openstack/swift: Attempt to narrow race conditions in DB connect https://review.openstack.org/82689 | 02:06 |
openstackgerrit | Peter Portante proposed a change to openstack/swift: Make initialization test failure more explicit https://review.openstack.org/82688 | 02:06 |
clayg | portante: yeah i'm not sure, i guess folks just aren't patient enough for there to be many requests recreating databases exactly two weeks after they've been deleted, so those races rarely get hit? | 02:19 |
portante | saw this on a brand new install | 02:25 |
*** saschpe has quit IRC | 02:26 | |
portante | clayg: see the initialize() code, where it locks the directory and then releases it | 02:26 |
portante | I might not have this right | 02:27 |
*** saschpe has joined #openstack-swift | 02:28 | |
portante | I'll pick this up tomorrow, possibly, in all day meetings for the next three days, so might have more time. :) | 02:28 |
*** jasondotstar has quit IRC | 02:30 | |
*** haomaiwang has joined #openstack-swift | 02:31 | |
*** d89 has joined #openstack-swift | 02:32 | |
*** d89 has quit IRC | 02:34 | |
*** d89 has joined #openstack-swift | 02:34 | |
h6w | Is it possible to have an active-active type GDC? Would the proxies talk to each other or can the storage nodes be members of more than one proxy? | 02:48 |
h6w | oic. https://swiftstack.com/images/posts/swift-global-replication/fetch-newest-3-regions-3-replicas.png So each proxy thinks it's the only proxy and knows about all storage nodes, correct? | 02:51 |
creiht | anyone know if they have set a date for the paris openstack summit? | 02:53 |
creiht | notmyname: -^ ? | 02:53 |
*** h6w has quit IRC | 03:08 | |
*** matsuhashi has quit IRC | 03:12 | |
*** h6w has joined #openstack-swift | 03:16 | |
*** haomaiw__ has joined #openstack-swift | 03:28 | |
*** nosnos has quit IRC | 03:29 | |
*** matsuhashi has joined #openstack-swift | 03:30 | |
*** haomaiwang has quit IRC | 03:32 | |
*** chandankumar_ has joined #openstack-swift | 03:32 | |
*** matsuhashi has quit IRC | 03:36 | |
hugokuo | h6w: correct ... But the diagram that you pasted is for a global cluster. It's a bit more complex than a regular deployment in a single datacenter tho. | 03:38 |
*** chandankumar_ has quit IRC | 03:40 | |
*** erlon has quit IRC | 03:42 | |
madhuri_ | portante: Are you there? | 03:45 |
madhuri_ | clayg: ping? | 03:52 |
*** chandankumar_ has joined #openstack-swift | 03:55 | |
*** gvernik has joined #openstack-swift | 04:20 | |
*** gvernik has quit IRC | 04:21 | |
*** matsuhashi has joined #openstack-swift | 04:23 | |
*** chandankumar_ has quit IRC | 04:23 | |
*** nosnos has joined #openstack-swift | 04:25 | |
*** fifieldt_ has quit IRC | 04:28 | |
h6w | hugokuo: Thanks. Yes. I mistakenly set up two independent clusters with their own proxy, thinking that the proxies did the negotiation. Now setting up a VPN so that they can see each other's storage nodes. | 04:35 |
*** zaitcev has quit IRC | 04:50 | |
*** ppai has joined #openstack-swift | 04:59 | |
*** matsuhashi has quit IRC | 05:14 | |
*** fifieldt_ has joined #openstack-swift | 05:17 | |
*** matsuhashi has joined #openstack-swift | 05:18 | |
Anju1 | if all the requests are coming for one partition and that partition gets full, is there a mechanism to avoid this scenario? | 05:37 |
*** fifieldt_ has quit IRC | 05:39 | |
*** chandan_kumar has quit IRC | 05:48 | |
*** manish_ has joined #openstack-swift | 05:49 | |
*** nshaikh has joined #openstack-swift | 05:58 | |
*** psharma has joined #openstack-swift | 06:17 | |
*** godb has joined #openstack-swift | 06:26 | |
godb | good afternoon ~ :) | 06:26 |
godb | any body alive?? | 06:26 |
openstackgerrit | Yuan Zhou proposed a change to openstack/swift: Fixes versioning function tests with non-zero default policy https://review.openstack.org/82515 | 06:27 |
godb | i have some question.. how can i delete .ts (tombstone) files?? i checked the expirer.py script, but i didn't know how to use it | 06:29 |
godb | the object auditor, replicator and expirer are all running fine | 06:30 |
godb | but they do not delete .ts files after the 1 week expiry | 06:30 |
*** sungju has quit IRC | 06:32 | |
*** manish_ has quit IRC | 06:35 | |
*** fifieldt_ has joined #openstack-swift | 07:00 | |
*** manish_ has joined #openstack-swift | 07:10 | |
manish_ | If i am trying to upload an object of 3 GB size and in between my connection gets terminated, will i have a partial file saved on disk or not? | 07:10 |
manish_ | What i am trying to know is whether the object upload and the object PUT on disk happen in parallel, or whether the object PUT on disk will only start once the proxy server receives the entire 3 GB file? | 07:12 |
ahale | the proxy writes to the object servers immediately, but to a tmp file that's moved into place when the 3GB is complete, there's no way to resume uploading to that temp file though | 07:13 |
ahale | the way to do that I guess would be to split the 3GB file into smaller chunks and upload them and a multipart object manifest to recombine them when the full object is requested | 07:14 |
manish_ | but if i am not doing a multi-part upload, then while the upload is in progress, will the Object server still start writing to disk? | 07:16 |
ahale | yes, but not in a way that's useful if the upload terminated prematurely | 07:19 |
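Since a prematurely terminated upload cannot be resumed, the workaround ahale sketches is a segmented upload driven by the client; a hedged example with python-swiftclient (the container name and segment size are illustrative, and --use-slo matches the command dfg quotes later in this log):

```shell
# Upload big.bin as 1 GiB segments plus a static large object manifest;
# a later GET of big.bin streams the segments back as a single object.
swift upload mycontainer big.bin --segment-size 1073741824 --use-slo
```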
*** sv has joined #openstack-swift | 07:21 | |
*** sv is now known as Guest47999 | 07:21 | |
manish_ | ok..i understood.. | 07:22 |
manish_ | As per the code it seems that the Proxy server takes the object in chunks of 64 KB from the WSGI server.. | 07:23 |
*** fifieldt_ has quit IRC | 07:24 | |
manish_ | So the question is, does WSGI get the whole 3GB in 64KB chunks from the Client/browser and then start sending it to the Proxy server, or does it get the first 64 KB from the client and send it to the proxy server, without waiting for the complete object? | 07:25 |
*** Guest47999 has quit IRC | 07:25 | |
*** saju_m has joined #openstack-swift | 07:30 | |
ahale | not sure what you mean with the wsgi, the first wsgi pipeline is within the proxy-server, it will not wait for the complete object. It will start sending to the object server immediately - the files aren't spooled on the proxy until fully uploaded from the client | 07:33 |
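To illustrate ahale's point that the proxy relays the body as it arrives rather than spooling it, here is a toy WSGI-style loop, not actual Swift proxy code; the 64 KB figure is the chunk size manish_ mentions:

```python
CHUNK_SIZE = 64 * 1024  # read the request body 64 KB at a time

def relay_body(wsgi_input, send_chunk):
    """Stream a request body downstream without buffering it all in memory."""
    while True:
        chunk = wsgi_input.read(CHUNK_SIZE)
        if not chunk:       # client finished (or the connection dropped)
            break
        send_chunk(chunk)   # forward the chunk to the object servers right away
```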
openstackgerrit | Zhang Hua proposed a change to openstack/swift: Add profiling middleware in Swift https://review.openstack.org/53270 | 07:44 |
*** sungju has joined #openstack-swift | 07:46 | |
manish_ | ok | 07:55 |
openstackgerrit | Victor Stinner proposed a change to openstack/python-swiftclient: Python 3: Get compatible types from six https://review.openstack.org/82552 | 07:56 |
*** sungju has quit IRC | 08:03 | |
godb | hi, i have some question. how can i delete .ts files (tombstones) ?? | 08:03 |
godb | my system suffers from these needless empty files | 08:05 |
godb | the object auditor/replicator/expirer are all running fine. | 08:07 |
openstackgerrit | Yuan Zhou proposed a change to openstack/swift: Update swift-object-info/swift-get-nodes to be storage policy aware https://review.openstack.org/82734 | 08:08 |
openstackgerrit | Zhang Hua proposed a change to openstack/swift: Add profiling middleware in Swift https://review.openstack.org/53270 | 08:16 |
*** JelleB is now known as a1|away | 08:18 | |
*** sungju has joined #openstack-swift | 08:20 | |
*** sungju has quit IRC | 08:23 | |
*** jasondotstar has joined #openstack-swift | 08:27 | |
*** mlipchuk has joined #openstack-swift | 08:29 | |
openstackgerrit | Madhuri Kumari proposed a change to openstack/swift: Removed hard coded location of ring https://review.openstack.org/82742 | 08:47 |
*** matsuhashi has quit IRC | 08:49 | |
*** saju_m has quit IRC | 08:50 | |
*** saju_m has joined #openstack-swift | 08:52 | |
*** tanee-away is now known as tanee | 08:53 | |
*** nacim has joined #openstack-swift | 08:55 | |
*** matsuhashi has joined #openstack-swift | 08:56 | |
*** mmcardle has joined #openstack-swift | 08:57 | |
*** sungju has joined #openstack-swift | 08:59 | |
*** manish_ has quit IRC | 09:00 | |
*** mlipchuk has quit IRC | 09:06 | |
*** manish_ has joined #openstack-swift | 09:06 | |
*** ppai has quit IRC | 09:09 | |
*** ppai has joined #openstack-swift | 09:10 | |
*** chandan_kumar has joined #openstack-swift | 09:13 | |
*** Midnightmyth has joined #openstack-swift | 09:13 | |
*** matsuhashi has quit IRC | 09:14 | |
*** matsuhashi has joined #openstack-swift | 09:18 | |
*** godb has quit IRC | 09:23 | |
*** mlipchuk has joined #openstack-swift | 09:28 | |
*** sungju has quit IRC | 09:35 | |
*** saju_m has quit IRC | 09:38 | |
*** nosnos has quit IRC | 09:39 | |
*** sungju has joined #openstack-swift | 09:45 | |
*** bvandenh has quit IRC | 09:47 | |
*** sungju has quit IRC | 09:51 | |
*** saju_m has joined #openstack-swift | 09:56 | |
*** d89 has quit IRC | 10:04 | |
*** matsuhashi has quit IRC | 10:08 | |
*** sungju has joined #openstack-swift | 10:09 | |
*** matsuhashi has joined #openstack-swift | 10:10 | |
*** bvandenh has joined #openstack-swift | 10:10 | |
*** mlipchuk has quit IRC | 10:11 | |
*** judd7_ has quit IRC | 10:16 | |
*** mlipchuk has joined #openstack-swift | 10:16 | |
*** sungju has quit IRC | 10:24 | |
*** sungju has joined #openstack-swift | 10:33 | |
*** sungju has quit IRC | 10:33 | |
*** sungju has joined #openstack-swift | 10:40 | |
openstackgerrit | Madhuri Kumari proposed a change to openstack/swift: Removed hard coded location of ring https://review.openstack.org/82742 | 10:44 |
*** jasondotstar has quit IRC | 10:47 | |
*** sungju has quit IRC | 10:50 | |
*** ppai has quit IRC | 10:55 | |
hugokuo | hmm.... a bunch of "Handoff requested" log lines for HEAD requests in the proxy log.... is it a bug? It's unnecessary to show the handoff log for HEAD requests all the time. Right ? (Tested in Swift 1.13.0) | 10:57 |
*** pconstantine_ has quit IRC | 10:58 | |
*** pconstantine_ has joined #openstack-swift | 10:59 | |
* hugokuo report https://bugs.launchpad.net/swift/+bug/1297214 | 11:02 | |
*** gvernik has joined #openstack-swift | 11:02 | |
*** ppai has joined #openstack-swift | 11:03 | |
*** tanee is now known as tanee-away | 11:06 | |
*** saju_m has quit IRC | 11:07 | |
*** saju_m has joined #openstack-swift | 11:08 | |
*** saju_m has quit IRC | 11:09 | |
*** madhuri has joined #openstack-swift | 11:10 | |
*** madhuri_ has quit IRC | 11:12 | |
*** lpabon has joined #openstack-swift | 11:14 | |
*** saju_m has joined #openstack-swift | 11:15 | |
openstackgerrit | Zhang Hua proposed a change to openstack/swift: Add profiling middleware in Swift https://review.openstack.org/53270 | 11:26 |
*** RockKuo has quit IRC | 11:32 | |
openstackgerrit | Christian Schwede proposed a change to openstack/python-swiftclient: Make bin/swift testable part 1 https://review.openstack.org/76487 | 11:33 |
*** pconstantine_ has quit IRC | 11:33 | |
*** saju_m has quit IRC | 11:38 | |
*** saju_m has joined #openstack-swift | 11:38 | |
*** tanee-away is now known as tanee | 11:42 | |
*** tanee is now known as tanee-away | 11:53 | |
*** JuanManuelOlle has joined #openstack-swift | 12:09 | |
*** manish_ has quit IRC | 12:10 | |
*** matsuhashi has quit IRC | 12:16 | |
*** matsuhashi has joined #openstack-swift | 12:23 | |
*** tdasilva has joined #openstack-swift | 12:33 | |
*** sriram has joined #openstack-swift | 12:34 | |
*** sriram has quit IRC | 12:34 | |
*** sriram has joined #openstack-swift | 12:34 | |
*** zul has quit IRC | 12:38 | |
*** JuanManuelOlle1 has joined #openstack-swift | 12:40 | |
*** zul has joined #openstack-swift | 12:41 | |
*** saju_m has quit IRC | 12:41 | |
*** JuanManuelOlle has quit IRC | 12:41 | |
*** mkollaro has joined #openstack-swift | 12:44 | |
*** psharma has quit IRC | 12:45 | |
*** Trixboxer has joined #openstack-swift | 12:52 | |
*** jasondotstar has joined #openstack-swift | 12:53 | |
*** Midnightmyth has quit IRC | 12:54 | |
*** saju_m has joined #openstack-swift | 12:55 | |
*** saju_m has quit IRC | 12:59 | |
*** saju_m has joined #openstack-swift | 13:00 | |
*** mkollaro has quit IRC | 13:03 | |
*** mkollaro1 has joined #openstack-swift | 13:03 | |
*** mkollaro1 is now known as mkollaro | 13:03 | |
*** matsuhashi has quit IRC | 13:03 | |
*** tanee-away is now known as tanee | 13:04 | |
*** saju_m has quit IRC | 13:04 | |
*** saju_m has joined #openstack-swift | 13:05 | |
*** erlon has joined #openstack-swift | 13:06 | |
*** saju_m has quit IRC | 13:09 | |
*** saju_m has joined #openstack-swift | 13:10 | |
*** a1|away is now known as JelleB | 13:19 | |
*** ChanServ changes topic to "the gerrit event stream is currently hung, blocking all testing. troubleshooting is in progress (next update at 14:00 utc)" | 13:22 | |
*** changbl has quit IRC | 13:22 | |
openstackgerrit | Christian Schwede proposed a change to openstack/python-swiftclient: Add functional tests for python-swiftclient https://review.openstack.org/76355 | 13:26 |
*** ChanServ changes topic to "Current Swift Release: 1.13.0 | Priority Reviews: https://wiki.openstack.org/wiki/Swift/PriorityReviews | Channel Logs: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/" | 13:30 | |
*** Midnightmyth has joined #openstack-swift | 13:35 | |
*** mmcardle has quit IRC | 13:40 | |
*** mmcardle has joined #openstack-swift | 14:00 | |
*** nacim has quit IRC | 14:02 | |
*** ppai has quit IRC | 14:06 | |
*** gvernik has quit IRC | 14:07 | |
*** dmsimard has joined #openstack-swift | 14:08 | |
*** bsdkurt1 has joined #openstack-swift | 14:08 | |
*** bsdkurt has quit IRC | 14:10 | |
*** tanee is now known as tanee-away | 14:15 | |
*** jasondotstar has quit IRC | 14:21 | |
*** mkollaro has quit IRC | 14:26 | |
creiht | cschwede: ping | 14:34 |
portante | 755996 | 14:36 |
portante | use it quick! | 14:37 |
creiht | cschwede: nm... I'll just leave a review :) | 14:39 |
*** mkollaro has joined #openstack-swift | 14:41 | |
*** jasondotstar has joined #openstack-swift | 14:46 | |
*** piyush1 has joined #openstack-swift | 14:53 | |
*** byeager has joined #openstack-swift | 15:05 | |
*** JuanManuelOlle1 has quit IRC | 15:06 | |
*** nacim has joined #openstack-swift | 15:09 | |
*** saju_m has quit IRC | 15:13 | |
*** changbl has joined #openstack-swift | 15:20 | |
notmyname | good morning, world | 15:24 |
*** madhuri_ has joined #openstack-swift | 15:34 | |
portante | notmyname: https://www.youtube.com/watch?v=ETE135Ib1ew | 15:35 |
*** nshaikh has quit IRC | 15:40 | |
*** krtaylor has quit IRC | 15:42 | |
creiht | anyone know what version of flake8 we are supposed to use with swift? | 15:43 |
dfg | torgomatic: did you see this: https://bugs.launchpad.net/swift/+bug/1296941 ? it's really weird. | 15:49 |
*** krtaylor has joined #openstack-swift | 15:50 | |
portante | creiht: is it not in the test-requirements.txt file? | 15:50 |
creiht | no | 15:52 |
portante | hmmm | 15:52 |
creiht | neither is pep8 | 15:52 |
creiht | heh | 15:53 |
portante | so then it is just assumed the user's development environment has the right stuff? | 15:53 |
creiht | yeah | 15:53 |
portante | I know our SAIO lists a bunch of base packages needed | 15:53 |
portante | hmmm | 15:53 |
creiht | portante: I recently had an issue because I had a pep8 that was too new | 15:54 |
portante | that seems bad | 15:54 |
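One way to avoid the "pep8 too new" surprise creiht hit is to pin the linters in test-requirements.txt; the versions below are purely illustrative, not the project's actual pins at the time:

```
# hypothetical additions to test-requirements.txt
flake8==2.0
pep8==1.4.6
```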
*** jergerber has joined #openstack-swift | 15:54 | |
creiht | I'm testing clay's tox.ini changes | 15:54 |
creiht | and can't figure out how to make it work | 15:55 |
portante | what changes are those? | 15:55 |
creiht | https://review.openstack.org/#/c/77072 | 15:55 |
creiht | man... 3 reviews this morning and all - :/ | 15:57 |
creiht | time for lunch | 15:57 |
creiht | there was a day when things were simple | 15:58 |
creiht | now you have to know all the right incantations in the right postures | 15:58 |
portante | we used to spend the summers playing football, baseball, swimming, riding bikes, ghost in the graveyard, etc. | 15:59 |
creiht | haha | 15:59 |
notmyname | creiht: still there? | 16:01 |
notmyname | portante: what are you working on today in the swift world? :-) | 16:03 |
*** madhuri_ has quit IRC | 16:05 | |
*** mlipchuk has quit IRC | 16:12 | |
*** JuanManuelOlle has joined #openstack-swift | 16:14 | |
*** JuanManuelOlle has quit IRC | 16:23 | |
*** JuanManuelOlle has joined #openstack-swift | 16:23 | |
*** csd has joined #openstack-swift | 16:31 | |
*** madhuri_ has joined #openstack-swift | 16:35 | |
madhuri_ | anyone there? | 16:37 |
*** mmcardle has quit IRC | 16:41 | |
*** mmcardle1 has joined #openstack-swift | 16:52 | |
*** piyush1 has quit IRC | 16:56 | |
portante | notmyname: we are trying to debug why our little PUT test program run against swift does not work | 16:56 |
portante | so I am reviewing the db.py code and trying to find out why | 16:56 |
notmyname | ok | 16:56 |
portante | notmyname: do you need anything in particular? | 16:57 |
notmyname | we've got a regression in 1.13.0 because of the dlo/slo move to middleware that needs to be fixed before icehouse. just found it last night and confirmed with dfg and glange this morning | 16:57 |
portante | oh | 16:57 |
portante | I'd be willing to help | 16:57 |
notmyname | specifically, the proxy log lines aren't doing the right thing with manifests | 16:58 |
portante | okay, is that the bug pointed to above? | 16:58 |
notmyname | not sure that there is a bug filed yet | 16:58 |
notmyname | given the following client behavior: https://gist.github.com/notmyname/8dff98e4354cb6e1f93f | 16:59 |
notmyname | the proxy logs look like this: https://gist.github.com/notmyname/c3a747715c7100b4c5c2 | 16:59 |
portante | 1296941 | 16:59 |
notmyname | a few things to notice: | 16:59 |
notmyname | the manifest fetch doesn't have swift.source set, even though it's an internal-only request | 16:59 |
notmyname | there is no final log line for the total request. ie a final status code and bytes transferred line | 17:00 |
notmyname | and that's a regression from pre-1.13.0 | 17:01 |
glange | I don't think there was ever a final log line | 17:01 |
portante | yes, probably because of how proxy_logging works | 17:01 |
portante | glange: meaning, in 1.12.0 we also don't get a final log line? | 17:01 |
portante | or is that part of the regression? | 17:01 |
glange | yeah, no final log line ever | 17:01 |
notmyname | ah | 17:01 |
notmyname | glange: even with the original manifest fetch? | 17:01 |
glange | I think the only regression is that the manifest get should have the total bytes transferred | 17:02 |
glange | notmyname: yeah, I think so | 17:02 |
notmyname | ok, same effective thing. but you put it more simply than I did glange :-) | 17:02 |
portante | hmmm | 17:03 |
*** Diddi has quit IRC | 17:03 | |
*** haypo has left #openstack-swift | 17:04 | |
*** mkollaro has quit IRC | 17:07 | |
portante | glange: do you mean the first GET should have 10MB instead of 2350? | 17:07 |
glange | yes | 17:07 |
portante | that first request does not appear to be an internal call | 17:08 |
portante | that looks like an external client call | 17:08 |
glange | it is | 17:08 |
notmyname | portante: wait, that first one should have 10MB or 100MB? | 17:08 |
portante | sorry, 100MB | 17:08 |
notmyname | kinda important difference here ;-) | 17:09 |
portante | glange: was that a manifest get with the proper parameters? | 17:09 |
*** shri has joined #openstack-swift | 17:10 | |
dfg | notmyname: there's also another bug with the SLO refactor that a customer reported to us. | 17:10 |
*** mkollaro has joined #openstack-swift | 17:10 | |
glange | manifest requests that intend to download the actual object should have the total bytes transferred for the segments and not the size of the manifest object | 17:11 |
creiht | notmyname: I'm back | 17:11 |
dfg | this xLO refactor into middleware has caused a lot of problems. they were complicated features to begin with... | 17:11 |
portante | glange: right, so when the GET does not have any parameters, that is a full object get using the manifest, right? | 17:11 |
notmyname | creiht: I was hoping you would talk to glange over lunch about this ;-) come on in and join the fun :-) | 17:11 |
glange | portante: yes | 17:12 |
notmyname | glange: portante: or put another way, a proxy-logging line without swift.source set should have the number of bytes actually sent | 17:12 |
creiht | notmyname: I'm WFH today | 17:12 |
glange | http://docs.openstack.org/developer/swift/overview_large_objects.html | 17:13 |
glange | I had to look it up but see the curl examples | 17:13 |
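For context on the distinction being debated (manifest GET versus full-object GET), a hedged curl sketch in the spirit of the docs glange links; the proxy URL, account, container, object, and token are made up:

```shell
# A plain GET of an SLO streams the concatenated segments (the ~100MB case)
curl -H "X-Auth-Token: $TOKEN" \
    "http://proxy.example.com:8080/v1/AUTH_test/cont/bigobj"

# Adding ?multipart-manifest=get returns the manifest JSON itself
# (the ~2350-byte case)
curl -H "X-Auth-Token: $TOKEN" \
    "http://proxy.example.com:8080/v1/AUTH_test/cont/bigobj?multipart-manifest=get"
```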
creiht | I'm sure it's in good hands :) | 17:13 |
glange | creiht: come on | 17:13 |
creiht | hehe | 17:13 |
portante | notmyname, glange I believe this change is an effect of the double logging, which we did not account for when we moved to middleware for slo/dlo | 17:14 |
portante | I am guessing that first GET is logged by the proxy_logging middleware closest to the proxy-server app | 17:14 |
portante | and, really, the other three as well | 17:15 |
dfg | portante: no its not | 17:15 |
*** nacim has quit IRC | 17:15 | |
portante | dfg: how can you tell it isn't? | 17:15 |
dfg | well- it worked with the double logging pre-refactor to middleware | 17:15 |
portante | that is because the proxy-logging saw the final bytes pulled from all objects, no? | 17:16 |
dfg | with this pipeline: pipeline = healthcheck proxy-logging-l cache bulk ratelimit formpost tempurl slo tempauth account-quotas rackcdn staticweb proxy-logging proxy-server | 17:16 |
dfg | and this git checkout 7accddf1c3f54f67cf29d6eb69e416f798af6e23 everything works fine | 17:16 |
portante | what is proxy-logging-l? | 17:17 |
dfg | just proxy-logging | 17:17 |
dfg | ignore rackcdn | 17:17 |
portante | but is it the same code as the proxy-logging run next to the proxy-server then? | 17:17 |
dfg | ya- i separated them out for debugging when we split it out. it uses the same egg | 17:18 |
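A sketch of how two proxy-logging instances can be given distinct names while sharing the same egg, per dfg's description; the section names and pipeline follow his paste above (rackcdn omitted as he suggests), and the rest of each filter section is elided:

```ini
[pipeline:main]
pipeline = healthcheck proxy-logging-l cache bulk ratelimit formpost tempurl slo tempauth account-quotas staticweb proxy-logging proxy-server

[filter:proxy-logging-l]
use = egg:swift#proxy_logging

[filter:proxy-logging]
use = egg:swift#proxy_logging
```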
* portante looking at above commit ... | 17:18 | |
notmyname | dfg: that's a good idea :-) | 17:19 |
dfg | its just a commit pre-slo refactor | 17:19 |
dfg | the annoying thing is that i don't know why it's not working- it should. the proxy-logging to the left of the slo should wrap the entire outgoing response- so get all the outgoing bytes. but it's not working | 17:20 |
*** jairo has joined #openstack-swift | 17:20 | |
*** piyush has joined #openstack-swift | 17:21 | |
dfg | the other bug: https://bugs.launchpad.net/swift/+bug/1296941 is also totally weird | 17:21 |
dfg | and that needs to be fixed before next release too. | 17:21 |
portante | dfg: that above commit is Havana, no? | 17:22 |
dfg | i don't know if glange mentioned but this is just another in a series of bugs we've had with this refactor... | 17:22 |
portante | so how does the slo middelware work with that? | 17:22 |
jairo | hello guys.. I noticed that the object-auditor only checks one drive at a time, is there a way to make it check several drives simultaneously? | 17:22 |
dfg | i can't keep track of all the names.. | 17:22 |
dfg | 1.10 i think | 17:22 |
dfg | milestone-1.10.0-rc1 | 17:23 |
dfg | portante: at that point the slo middleware only did anything on the building of the manifest. all outgoing stuff was handled in proxy server conde piggy backing off dlo code | 17:24 |
portante | okay, so then I am back to this being a logging issue | 17:24 |
dfg | proxy server code | 17:24 |
dfg | ya- it is a logging issue | 17:24 |
dfg | but began with refactor | 17:25 |
portante | the proxy_logging closest to the proxy-server app is logging that GET, when it should be the proxy logging at the beginning of the pipeline | 17:25 |
portante | yes, my guess is that the refactoring did not take into account the subtleties of the dual proxy-logging middlewares | 17:25 |
dfg | it should do both | 17:25 |
portante | yes, with one marked as the internal and the other being the client one | 17:26 |
portante | dfg: is there a functional test that reproduces this problem? | 17:26 |
dfg | the one on right should log all the sub requests with SLO swift.source and report the size of the segments the proxy-logging-l should have the entire response with no swift.source | 17:26 |
portante | agreed | 17:27 |
dfg | portante: any func test that pulls out an SLO- i don't know how the func test is going to look at the log line generated | 17:27 |
dfg | i missed a . in there somewhere | 17:27 |
portante | if you can point me at a test, I'll fix the logging | 17:28 |
portante | I would like to not spend time finding the test that reproduces it | 17:28 |
dfg | ok- one sec | 17:29 |
portante | thx | 17:29 |
portante | notmyname, dfg, glange is there a bug # for the logging issue? | 17:30 |
notmyname | portante: I haven't filed one and I only found this late yesterday afternoon | 17:31 |
portante | okay, thx | 17:31 |
notmyname | portante: I'll go do that now (unless you've already started) | 17:32 |
portante | please, thanks, I am looking at the code | 17:32 |
*** IRTermite has joined #openstack-swift | 17:32 | |
*** gyee has joined #openstack-swift | 17:32 | |
dfg | portante: anyway- just running this should reproduce it: python test/functional/tests.py TestSlo.test_slo_get_simple_manifest | 17:40 |
portante | dfg: great, thanks | 17:40 |
dfg | or you can just use swift-client to generate a slo (swift upload cont file -S 1000 --use-slo) and download it. you'll get a lot less logs to look at | 17:41 |
portante | k, thx | 17:43 |
*** lpabon has quit IRC | 17:44 | |
*** mlipchuk has joined #openstack-swift | 17:49 | |
notmyname | portante: my gist has client-side and logs | 17:50 |
creiht | notmyname: I'm going to poke at the other slo bug to see if I can figure out what is going on | 17:51 |
*** madhuri_ has quit IRC | 17:52 | |
*** piyush has quit IRC | 17:55 | |
*** mlipchuk has quit IRC | 17:57 | |
*** changbl has quit IRC | 18:00 | |
notmyname | creiht: thanks | 18:01 |
notmyname | I got pulled into a chat with joearnold. I'm typing up the logging bug now | 18:02 |
creiht | notmyname: tell him that you have some technology that you need to direct ;) | 18:03 |
*** lpabon has joined #openstack-swift | 18:04 | |
notmyname | lol | 18:04 |
*** mlipchuk has joined #openstack-swift | 18:12 | |
notmyname | portante: dfg: does this look correct? https://bugs.launchpad.net/swift/+bug/1297438 | 18:14 |
portante | looks okay to me | 18:16 |
portante | notmyname: | 18:16 |
notmyname | thanks | 18:16 |
*** zaitcev has joined #openstack-swift | 18:17 | |
*** ChanServ sets mode: +v zaitcev | 18:17 | |
dfg | notmyname: ya looks fine | 18:18 |
notmyname | dfg: thanks | 18:18 |
notmyname | dfg: and thanks for doing the extra work to confirm | 18:18 |
*** jasondotstar has quit IRC | 18:18 | |
notmyname | portante: between you and dfg, who is point on a patch for this? | 18:19 |
*** changbl has joined #openstack-swift | 18:19 | |
* notmyname goes to eat lunch | 18:20 |
*** jasondotstar has joined #openstack-swift | 18:21 | |
portante | dfg, are you working on a fix as well? | 18:22 |
portante | I am currently debugging how the proxy logging works in the face of slo | 18:23 |
*** shakamunyi has joined #openstack-swift | 18:25 | |
*** shakamunyi has quit IRC | 18:25 | |
jairo | do you know when the quarantined objects get reprocessed? I have a bunch of files waiting but I don't see any progress, as per the number of files. | 18:26 |
*** shakamunyi has joined #openstack-swift | 18:26 | |
*** shakamunyi has quit IRC | 18:26 | |
*** shakamunyi has joined #openstack-swift | 18:27 | |
*** shakamunyi has quit IRC | 18:27 | |
*** piyush has joined #openstack-swift | 18:41 | |
*** piyush1 has joined #openstack-swift | 18:43 | |
*** mmcardle1 has quit IRC | 18:44 | |
*** piyush has quit IRC | 18:45 | |
clayg | jairo: quarantined objects don't get "processed" automatically - and you shouldn't have a "bunch" of files? | 18:46 |
jairo | clayg at some point I had one of the nodes with the wrong suffix number, and that replicated files with the wrong path everywhere | 18:55 |
jairo | clayg my understanding is that the replication will check with the other nodes and find the correct version of the file and fix it, is there something I need to run to address this | 18:56 |
*** dmsimard1 has joined #openstack-swift | 19:00 | |
notmyname | /back | 19:00 |
dfg | portante: i'll work on it in a bit. got a couple things i have to do first | 19:00 |
*** dmsimard has quit IRC | 19:01 | |
portante | dfg: I think I understand what is happening | 19:01 |
creiht | notmyname: I think I have a fix for the range bug | 19:01 |
notmyname | yay :-) | 19:01 |
portante | it looks like there is one or more places where the environment is not being copied | 19:01 |
creiht | not sure there is an easy way to add a test though :/ | 19:02 |
portante | still tracking down where that is happening | 19:02 |
clayg | jairo: so you have one node taking down objects and hashing them into the wrong path... so then they get quarantined... as long as one copy went onto a node with the right suffix in the conf - yes replication will fix it | 19:02 |
*** dmsimard has joined #openstack-swift | 19:02 | |
creiht | heh | 19:02 |
clayg | jairo: but it won't clean up the garbage in quarantine dir | 19:02 |
creiht | dfg says it may fix the logging issue as well | 19:02 |
dfg | portante: ya- i'm hoping both the bugs are related | 19:02 |
creiht | let me post a review | 19:02 |
jairo | clayg how would I clean that up?, safely | 19:03 |
notmyname | jairo: how many quarantined objects do you have? dozens? hundreds? millions? | 19:03 |
jairo | millions | 19:04 |
clayg | jairo: that's awesome | 19:04 |
notmyname | ok then | 19:04 |
*** dmsimard1 has quit IRC | 19:04 | |
portante | creiht, dfg, what have you found? | 19:04 |
dfg | i haven't found anything :) | 19:05 |
creiht | portante: pushing something shortly | 19:05 |
clayg | jairo: so you should use swift-object-info (or more likely some custom code based on what it does) to pull out the path (/account/container/object) from the quarantined object and then do a HEAD (look into swift.common.internal_client) and see if you get back a 2XX with a timestamp that matches | 19:05 |
portante | what I have found is that the proxy_logging closest to the proxy-server app is logging most of the requests, so it cannot know how to log the final responses | 19:05 |
portante | I have a fix for that alone | 19:05 |
clayg | jairo: if the cluster can respond successfully for the object then you can throw away the quarantined data | 19:06 |
clayg | jairo: if not - then there's more work todo - but you might see if you can get that "millions" down a bit before going into all that... | 19:06 |
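A rough sketch of the check clayg describes, assuming swift-object-info prints a "Path: /account/container/object" line for a quarantined datafile and that a proxy URL and auth token are at hand; this is illustrative scaffolding, not a tested tool, and it only checks for a 2XX (clayg also suggests comparing timestamps):

```python
#!/usr/bin/env python
"""Hypothetical quarantine checker: HEAD each quarantined object's path and
report whether the cluster still serves it; if so, the quarantined copy can
probably be discarded."""
import os
import subprocess
import sys

import requests  # assumed available; any HTTP client would do

PROXY = "http://proxy.example.com:8080/v1"   # made-up proxy endpoint
TOKEN = os.environ["OS_AUTH_TOKEN"]          # pre-obtained auth token

def quarantined_path(datafile):
    """Pull '/account/container/object' out of swift-object-info output."""
    out = subprocess.check_output(["swift-object-info", datafile]).decode("utf-8")
    for line in out.splitlines():
        if line.strip().startswith("Path:"):
            return line.split("Path:", 1)[1].strip()
    return None

def still_served(path):
    resp = requests.head(PROXY + path, headers={"X-Auth-Token": TOKEN})
    return 200 <= resp.status_code < 300

if __name__ == "__main__":
    for datafile in sys.argv[1:]:
        path = quarantined_path(datafile)
        if path and still_served(path):
            print("%s: cluster copy looks fine, quarantined file can go" % datafile)
        else:
            print("%s: needs a closer look" % datafile)
```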
openstackgerrit | Chuck Thier proposed a change to openstack/swift: Fix range requests with static large objects https://review.openstack.org/82895 | 19:07 |
creiht | Don't ask me *why* that fixes it :) | 19:07 |
jairo | clayg: ok, that sounds a bit painful but it is a start, I thought the auditor was doing that, will it ever? | 19:07 |
creiht | I was just comparing slo and dlo | 19:07 |
creiht | I haven't been looking at the logging stuff, but would be interested to have someone try it and see if it fixes it | 19:08 |
creiht | ok dfg says it doesn't help the logging | 19:08 |
notmyname | jairo: clayg: that's exactly what I was thinking | 19:09 |
notmyname | jairo: the auditor creates the quarantine files. but it doesn't remove them | 19:09 |
clayg | jairo: nope the auditor doesn't look at quarantines - plus that many HEAD requests is going to probably have some impact on the load of the system - so you might wanna keep an eye on that - throttle it to your needs. | 19:09 |
jairo | notmyname: clayg: cool... but definitely will be even cooler if you guys add it to the auditor server. | 19:12 |
clayg | jairo: idk about the auditor, but a quarantine check tool to demonstrate the basic pattern might be a useful script to document in bin/ - not sure in how many cases it would be generalizable - I don't think I've heard much from other folks that suffered a misconfigured hash suffix... | 19:13 |
*** openstackgerrit has quit IRC | 19:18 | |
*** openstackgerrit has joined #openstack-swift | 19:18 | |
notmyname | clayg: jairo: what about locally copying the quarantined files back into the objects directory and letting replication/auditing take care of it like normal (now that the hash suffix is set properly)? then at that point, after the processes have finished, check to see if anything is quarantined and deal with it as normal | 19:18 |
jairo | notmyname: clayg: but the auditor put it there after I fixed the hash suffix, so it will just quarantine it again, wouldn't it? | 19:21 |
portante | creiht: thanks, that seems a bit cryptic | 19:21 |
clayg | notmyname: maybe, depends how much replication has already moved the good bits about | 19:21 |
notmyname | jairo: ah, ok | 19:21 |
clayg | jairo: you wouldn't be putting it back where it was - you'd be putting it out where it goes - some fair amount of path munging but you might be able to get it right | 19:22 |
clayg | I think it will be worth pursuing if you have a lot of objects in quarantine that aren't available in the cluster otherwise - but my understanding is that you had multiple nodes and only one node was affected by the mis-config? | 19:23 |
jairo | I just checked one and it is replicated properly, so it would be safe to delete | 19:23 |
clayg | one down | 19:23 |
notmyname | 999999 more to go | 19:24 |
jairo | and yes it is correct the issue was in a node | 19:24 |
jairo | ;-) | 19:25 |
creiht | portante: heh yeah I know | 19:26 |
*** zul has quit IRC | 19:26 | |
portante | creiht: the work on running the functional tests in-process was motivated by the fact that I had a very hard time following the slo and dlo code changes to middleware | 19:27 |
portante | I wanted to run functional tests and verify code paths, but it was too much of a pain to do with the functional tests in a SAIO | 19:27 |
creiht | heh | 19:27 |
portante | so I basically gave up on understanding SLO and DLO changes, because they were accepted before I could wrap my head around things | 19:27 |
creiht | yeah I don't completely grok all of the slo/dlo stuff either | 19:27 |
jairo | clayg: so the best bet here is for me to write a little tool to do the same thing I did manually, but automated...? | 19:28 |
portante | I don't *think* I understand the logging, and I hope to have a patch in a few | 19:28 |
clayg | jairo: it'll be fun - you'll see! | 19:28 |
*** zul has joined #openstack-swift | 19:30 | |
openstackgerrit | Chuck Thier proposed a change to openstack/swift: Fix range requests with static large objects https://review.openstack.org/82895 | 19:31 |
*** dmsimard1 has joined #openstack-swift | 19:34 | |
*** dmsimard has quit IRC | 19:36 | |
*** JuanManuelOlle has quit IRC | 19:37 | |
*** JuanManuelOlle has joined #openstack-swift | 19:38 | |
*** csd has quit IRC | 19:47 | |
*** shakayumi has joined #openstack-swift | 19:54 | |
*** shakamunyi has joined #openstack-swift | 19:56 | |
*** swills has left #openstack-swift | 19:56 | |
*** shakayumi has quit IRC | 19:58 | |
*** changbl has quit IRC | 20:00 | |
openstackgerrit | Samuel Merritt proposed a change to openstack/swift: Tests for storage policies in /info https://review.openstack.org/82906 | 20:00 |
*** csd has joined #openstack-swift | 20:01 | |
openstackgerrit | Clay Gerrard proposed a change to openstack/swift: Bump up sleep when expecting a timeout https://review.openstack.org/82664 | 20:01 |
*** shakamunyi has quit IRC | 20:03 | |
openstackgerrit | Samuel Merritt proposed a change to openstack/swift: Add note to sample conf about policies being experimental https://review.openstack.org/82907 | 20:06 |
*** shakamunyi has joined #openstack-swift | 20:06 | |
openstackgerrit | Peter Portante proposed a change to openstack/swift: Invert which proxy logging middleware instance logs https://review.openstack.org/82909 | 20:08 |
portante | notmyname, dfg, creiht: but the above seems to address the issue | 20:09 |
portante | I have to work on unit tests and functional tests to ensure this behavior change, but it would be great to have folks try this out and see how it works | 20:09 |
notmyname | portante: confirmed that it fixes the reported issue. I haven't run any other tests | 20:16 |
portante | notmyname: there is one thing it does not do: log the GET of the manifest itself separately from the client GET of that object | 20:17 |
portante | that will take another change to slo, and possibly dlo as well, to make that behavior happen | 20:17 |
notmyname | portante: I'm not sure that's needed for now. from what I understood from dfg, the original behavior didn't log the manifest fetch either | 20:17 |
portante | yes, the dual proxy_logging was kinda broken | 20:18 |
notmyname | portante: ie, that sounds nice, but fixing the logging bug w.r.t. the empty swift.source lines is more important | 20:18 |
portante | certainly | 20:18 |
portante | I am not planning on doing that work, unless somebody asks. :) | 20:19 |
*** jasondotstar has quit IRC | 20:20 | |
notmyname | dfg: glange: can you confirm that portante's patch results in a restoration of the old behavior? | 20:21 |
*** sriram has quit IRC | 20:22 | |
*** mkollaro has quit IRC | 20:23 | |
notmyname | portante: I checked out 1.12.0 (and took dlo out of the pipeline). here are the logs: https://gist.github.com/notmyname/29ed027c500697849985 | 20:29 |
notmyname | portante: which actually looks worse | 20:29 |
*** dmsimard1 is now known as dmsimard | 20:31 | |
notmyname | portante: and with 1.11.0 (after also removing gatekeeper): https://gist.github.com/notmyname/e588c7d0ca038e5f9503 | 20:32 |
*** mkollaro has joined #openstack-swift | 20:32 | |
notmyname | portante: all of those are with SLO | 20:34 |
portante | hmm, looking | 20:34 |
notmyname | portante: so in all of those cases, your patch looks better, IMO | 20:35 |
portante | yes, I agree | 20:35 |
portante | notmyname: and this patch removes one more iterator processing responses, which might be a help | 20:36 |
*** Trixboxer has quit IRC | 20:36 | |
*** mmcardle has joined #openstack-swift | 20:37 | |
*** tdasilva has left #openstack-swift | 20:41 | |
*** changbl has joined #openstack-swift | 20:51 | |
*** csd has quit IRC | 20:55 | |
*** csd has joined #openstack-swift | 20:57 | |
*** mmcardle has quit IRC | 21:00 | |
*** JuanManuelOlle has quit IRC | 21:00 | |
*** judd7_ has joined #openstack-swift | 21:04 | |
openstackgerrit | John Dickinson proposed a change to openstack/swift: fix a skipped account ACLs functional test https://review.openstack.org/82922 | 21:07 |
*** shakayumi has joined #openstack-swift | 21:08 | |
*** shakayumi has quit IRC | 21:09 | |
*** shakamunyi has quit IRC | 21:11 | |
* notmyname has to step out for a while | 21:12 | |
notmyname | I'll check in later this afternoon/evening | 21:12 |
* portante will be dropping off in about 45 minutes | 21:14 | |
*** lpabon has quit IRC | 21:25 | |
openstackgerrit | Peter Portante proposed a change to openstack/swift: Consolidate and reuse exception classes https://review.openstack.org/82925 | 21:26 |
openstackgerrit | Peter Portante proposed a change to openstack/swift: Attempt to narrow race conditions in DB connect https://review.openstack.org/82689 | 21:26 |
openstackgerrit | Peter Portante proposed a change to openstack/swift: Make initialization test failure more explicit https://review.openstack.org/82688 | 21:26 |
openstackgerrit | Peter Portante proposed a change to openstack/swift: In-process swift server for functional tests https://review.openstack.org/66108 | 21:30 |
*** Alex_Gaynor has quit IRC | 21:39 | |
dfg | portante: that looks good- thanks for taking care of that. | 21:40 |
*** Alex_Gaynor has joined #openstack-swift | 21:41 | |
creiht | dfg: I'm having difficulty figuring out a way to test the slo change | 21:41 |
creiht | I may need some assistance tomorrow :) | 21:41 |
*** ryao_ has joined #openstack-swift | 21:42 | |
*** jeblair_ has joined #openstack-swift | 21:42 | |
*** jeblair_ is now known as corvus | 21:43 | |
portante | creiht: functional tests! :) | 21:44 |
portante | dfg: that change has an implied behavior change as well | 21:44 |
portante | it is now logging the time for the entire pipeline after the left-most proxy_logging gets a hold of the request | 21:45 |
portante | in the past, the timing reported was only for the right-most proxy_logging in many cases | 21:45 |
portante | not sure that will be a material difference, but just to be aware of it | 21:45 |
*** ryao has quit IRC | 21:45 | |
*** jeblair has quit IRC | 21:45 | |
*** corvus is now known as jeblair | 21:45 | |
*** StevenK has quit IRC | 21:47 | |
*** russellb has quit IRC | 21:47 | |
*** zigo has quit IRC | 21:47 | |
*** StevenK has joined #openstack-swift | 21:47 | |
*** russellb has joined #openstack-swift | 21:48 | |
*** zigo has joined #openstack-swift | 21:49 | |
*** Midnightmyth has quit IRC | 21:58 | |
dfg | portante: ya- that sounds fine. as long as the total request time is the entire duration of the request (as opposed to the object-server) i don't think that matters | 22:04 |
dfg | although it would be neat to see how much time is spent walking through all that middleware | 22:05 |
*** piyush1 has left #openstack-swift | 22:08 | |
*** changbl has quit IRC | 22:17 | |
*** byeager has quit IRC | 22:25 | |
*** byeager has joined #openstack-swift | 22:25 | |
*** byeager has quit IRC | 22:30 | |
notmyname | /back | 22:41 |
*** dmsimard has quit IRC | 22:42 | |
h6w | Morning all. :-) | 22:44 |
*** jergerber has quit IRC | 22:44 | |
* h6w is feeling very proud this morning after getting his first global swift cluster working yesterday. | 22:46 | |
notmyname | cool! | 22:46 |
h6w | Thanks to everyone, but particularly notmyname! :-D | 22:46 |
notmyname | looks like the Atlanta schedule has been posted http://openstacksummitmay2014atlanta.sched.org | 22:51 |
portante | dfg: it was not accounting for the middleware before, now it is, that is the change | 22:51 |
portante | I need to write a functional test, or maybe a unit test, that shows that | 22:51 |
notmyname | portante: which patch? (I closed my IRC client and lost the buffer from earlier today) | 22:53 |
portante | ahh, sec | 22:55 |
portante | notmyname: https://review.openstack.org/82909 | 22:56 |
notmyname | thanks | 22:56 |
portante | gotta step away a bit | 22:56 |
portante | it needs unit tests so that we can check the proxy_logging is properly working when two proxy_logging middlewares are in place | 22:57 |
portante | bbiab | 22:57 |
openstackgerrit | John Dickinson proposed a change to openstack/swift: fix a skipped account ACLs functional test https://review.openstack.org/82922 | 22:57 |
notmyname | clayg: done | 22:57 |
*** byeager has joined #openstack-swift | 23:08 | |
*** erlon has quit IRC | 23:12 | |
*** ryao_ has quit IRC | 23:20 | |
*** ryao_ has joined #openstack-swift | 23:20 | |
*** ryao_ is now known as ryao | 23:21 | |
*** krtaylor has quit IRC | 23:23 | |
*** byeager has quit IRC | 23:23 | |
*** byeager has joined #openstack-swift | 23:24 | |
*** byeager has quit IRC | 23:28 | |
*** byeager has joined #openstack-swift | 23:30 | |
*** mkollaro has quit IRC | 23:30 | |
*** mkollaro has joined #openstack-swift | 23:31 | |
*** occupant has quit IRC | 23:32 | |
notmyname | clayg: can you do the clicky on https://review.openstack.org/#/c/82922/ ? | 23:54 |
clayg | notmyname: not just right at this moment | 23:58 |
notmyname | clayg: no! do it now! ;-) | 23:59 |