22:02:49 #startmeeting db
22:02:50 Meeting started Thu Feb 7 22:02:49 2013 UTC. The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:02:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:02:53 The meeting name has been set to 'db'
22:03:30 (kinda lost my train of thought with the break, heh)
22:03:51 i have 2 separate issues
22:03:56 though somewhat related
22:04:12 but I dunno what the agenda here is..
22:04:34 so i didn't have an agenda for today
22:04:40 ok
22:04:49 last meeting, we went over the status of all the db BPs, which was good, but i don't think we need to rehash that
22:05:10 the problems you're seeing with pooling are pretty big IMO
22:05:17 especially how that will affect conductor
22:05:33 they've uncovered at least one problem for sure, one of which you were already aware, I think
22:05:36 yeah
22:05:53 i'll recap
22:06:01 starting with this first issue
22:06:57 well, first...
22:07:20 With the move to nova-conductor... we're now pushing DB queries to a single node (or set of nodes, if you run multiple nova-conductors)...
22:07:31 but in a large deployment, you likely have far fewer conductors than computes.
22:07:35 You don't want to have 1 per compute :)
22:08:04 This means that DB pooling becomes important... as without it, DB calls block the whole python process.
22:08:40 The RPC queue will back up because the service cannot keep up with all of the DB requests if it has to block on every single one without any sort of parallelism
22:09:00 Good so far?
22:09:05 yep
22:09:23 Cool... so I've been doing testing with db pooling
22:09:45 With the trunk code, under load, you can get some connect timeouts, etc. This patch is needed:
22:09:47 https://review.openstack.org/#/c/21461/
22:09:57 which changes how the thread pooling is done
22:10:15 After applying that...
22:10:34 issue #1) innodb reporting Deadlock
22:11:07 Seems that DB pooling started working =)
22:11:08 when you get 2 bw_usage_update() requests in parallel for the same instance uuid (can be different mac addresses/networks), this happens
22:11:19 This is due to doing UPDATE+INSERT in a single transaction
22:11:53 iirc, it should also deadlock the first time any instance is provisioned for a new tenant, if that happens in parallel.
22:12:05 and in a few places around fixed / floating IP allocation, if they happen fast enough in parallel
22:12:21 Yeah, you had a list that reported a number of other spots where this could happen
22:12:31 besides bw_usage_update()
22:12:40 right. there are also some open bugs on LP about these now
22:13:13 In any case... we've found that the bw_usage_cache table should have a unique index on instance_uuid+mac+start_period, which it doesn't right now.
22:13:29 And that would allow us to do things like INSERT ON DUP KEY UPDATE
22:13:40 if we can find a portable way to do it with sqlalchemy :)
22:14:37 sounds like we should add the index regardless, while we hunt for the portable upsert
22:14:42 I tested a variation on this that removed the Deadlock
22:14:49 yes
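
(For reference: a minimal sketch of the upsert approach being discussed, assuming MySQL/InnoDB and that the unique index on instance_uuid+mac+start_period is in place. The column set shown is illustrative, not the exact bw_usage_cache schema, and ON DUPLICATE KEY UPDATE is MySQL-specific -- it is not the portable SQLAlchemy form the team is still looking for.)

    # Sketch only: assumes MySQL/InnoDB and a unique index on
    # (instance_uuid, mac, start_period); column names are illustrative.
    from sqlalchemy import text

    UPSERT_BW_USAGE = text("""
        INSERT INTO bw_usage_cache
            (instance_uuid, mac, start_period, bw_in, bw_out)
        VALUES
            (:instance_uuid, :mac, :start_period, :bw_in, :bw_out)
        ON DUPLICATE KEY UPDATE
            bw_in = VALUES(bw_in),
            bw_out = VALUES(bw_out)
    """)

    def bw_usage_upsert(session, instance_uuid, mac, start_period,
                        bw_in, bw_out):
        # One atomic statement replaces the SELECT-then-UPDATE-or-INSERT
        # pattern that deadlocks when two updates race on the same row.
        session.execute(UPSERT_BW_USAGE, {'instance_uuid': instance_uuid,
                                          'mac': mac,
                                          'start_period': start_period,
                                          'bw_in': bw_in,
                                          'bw_out': bw_out})

The unique index has to exist first (e.g. added in a migration), since ON DUPLICATE KEY UPDATE only triggers on a unique-key collision.
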
22:15:11 I think we can solve that issue without *too* much trouble, however
22:15:13 issue #2:
22:15:53 Even after getting innodb Deadlocks to go away, I'm seeing queries getting stuck and timing out due to waiting on innodb locks.
22:16:23 The only conclusion is that sqlalchemy is not committing a transaction... it's holding it somewhere
22:16:26 i dug through my notes and remembered that i found this issue a while back, too
22:16:34 https://bugs.launchpad.net/nova/+bug/1007038/comments/4
22:16:36 Launchpad bug 1007038 in nova "Nova is issuing unnecessary ROLLBACK statements to MySQL" [Low,Confirmed]
22:16:40 but I can only see this when thread pooling is on
22:16:58 devananda: I was able to cause an extra rollback very easily..
22:17:05 kind of a side topic
22:17:08 comstud: it was happening in august last year. was thread pooling in at that time?
22:17:12 no
22:17:28 it seems to be our wrapping of DB exceptions
22:17:30 rather, it was happening when I disabled pool_reset_on_return
22:17:34 from what I could tell
22:17:48 hmmm
22:17:50 i had code such as this:
22:17:55 with session.begin():
22:18:07 try:
22:18:09 do a query
22:18:18 except DBDuplicateEntry:
22:18:21 pass
22:18:34 and I noticed that 2 rollbacks would happen if there was an exception
22:18:43 and there'd be an error logged about it
22:18:48 kinda makes sense
22:18:59 sqla is probably doing a rollback before the exception bubbles up to your code
22:19:04 it seems like a rollback would happen when the exception occurred
22:19:13 and then also when we exited the context manager.
22:19:22 and then after you leave the with session context, another implicit rollback happens (because of pool_reset_on_return)
22:19:25 right
22:19:25 yep!
22:19:27 exactly
22:19:40 that's a side issue, I think
22:19:57 i'm concerned about where sqlalchemy seems to be holding a transaction
22:20:04 yea, me too
22:20:07 causing these lock wait timeouts
22:20:18 I can cause the whole process to stall by slamming DB requests
22:20:33 after meetings are done today, I was going to debug with the eventlet backdoor
22:21:17 anyway, that's where I am with 'bugs' that show up with db pooling.
22:21:45 separate issue, somewhat related:
22:22:08 sqlalchemy is extremely slow compared to raw queries with MySQLdb.
22:22:14 And there are no lock wait timeouts, either
22:22:15 :)
22:22:35 :)
22:22:37 comstud: I think that's only with the ORM. Low-level sqla is pretty fast IMX
22:22:48 it very well might be, I just haven't tested it yet
22:22:57 belliott and I have been working on this together
22:23:02 by raw queries, you mean model.insert? or session.execute("INSERT ... INTO table")
22:23:27 just want to be clear
22:23:31 mysqldb .cursor(....)
22:23:35 the fast ones are session.execute, but it could be the Python insert(), not necessarily the string insert
22:24:01 comstud: please confirm for yourself, but I want to make sure we don't throw out the baby with the bathwater
22:24:23 cursor.execute('UPDATE bw...')
22:24:28 my experience is that low-level SQLAlchemy is an excellent portability shim, and high-level sqla ORM doesn't scale to big projects
22:24:38 dripton: I'm not proposing we do. I'm just stating what I've seen so far
22:24:42 ok
22:24:53 "doesn't scale" sums up my experience with all ORMs
22:24:56 +1
22:25:01 it was just a quick hack to test something
22:25:04 but I haven't tried all of them, so I'm being nice
22:25:24 wanted to eliminate innodb just being stupid somehow
22:25:30 yep
22:25:33 one layer at a time
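
(For reference: a rough sketch of the three layers being compared here -- the ORM, low-level SQLAlchemy via session.execute, and a raw MySQLdb cursor. The model and column names are stand-ins for illustration, not nova's actual models or DB API.)

    # Sketch of the three query layers under discussion.  The model below
    # is a minimal stand-in for the real bw_usage_cache model, not nova code.
    from sqlalchemy import (BigInteger, Column, DateTime, Integer, String,
                            text)
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class BwUsageCache(Base):
        __tablename__ = 'bw_usage_cache'
        id = Column(Integer, primary_key=True)
        instance_uuid = Column(String(36))
        mac = Column(String(255))
        start_period = Column(DateTime)
        bw_in = Column(BigInteger)
        bw_out = Column(BigInteger)

    # 1) ORM: convenient, but the slow path under load.
    def orm_update(session, uuid, bw_in):
        row = session.query(BwUsageCache).filter_by(instance_uuid=uuid).first()
        row.bw_in = bw_in   # flushed/committed by the enclosing session

    # 2) Low-level SQLAlchemy (session.execute): still portable, much faster.
    def sqla_update(session, uuid, bw_in):
        session.execute(text("UPDATE bw_usage_cache SET bw_in = :bw_in "
                             "WHERE instance_uuid = :uuid"),
                        {'bw_in': bw_in, 'uuid': uuid})

    # 3) Raw MySQLdb cursor, i.e. the cursor.execute('UPDATE bw...') hack.
    def raw_update(conn, uuid, bw_in):
        # conn is a MySQLdb connection, e.g. from MySQLdb.connect(...)
        cursor = conn.cursor()
        cursor.execute("UPDATE bw_usage_cache SET bw_in = %s "
                       "WHERE instance_uuid = %s", (bw_in, uuid))
        conn.commit()
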
22:26:00 would it be viable to replace just the areas that are likely to deadlock with raw sql?
22:26:21 I think we'd end up replacing entire api calls that have problems
22:26:26 i was thinking about that, yes
22:26:33 ie, anywhere that currently uses with_lockmode or select-then-update-or-insert
22:26:55 and yea, that does mean several api calls get entirely rewritten to avoid locking issues
22:26:56 but I think all of the calls potentially have this 'lock wait timeout' issue
22:27:13 because it's a sqlalchemy orm layer problem
22:27:14 comstud +1
22:27:23 with db pooling anyway
22:27:25 well
22:27:29 We can start with a couple that are known bad, then start attacking en masse if it works well
22:27:31 while i agree with that
22:27:40 but in particular, I've seen it with 3 or 4 calls
22:27:45 if we replaced all the ORM code that touches table X with raw sql, that didn't have locking problems
22:27:48 they just happen to be the most common calls that cells uses.
22:27:59 so that doesn't eliminate any of the others
22:28:15 true. but lock waits and deadlocks are table specific
22:28:25 it's not like holding a lock on table_a causes a problem for table_b
22:28:33 sure sure
22:28:34 i know
22:28:42 but i'd expect that it could happen on any table
22:28:42 granted, there _is_ a larger problem with sqla here, which i'd love to see fixed :)
22:28:53 yea
22:28:54 ah. so I wouldn't
22:29:07 afaict it will only happen when the ORM is trying to pre-emptively lock rows
22:29:20 ie, with_lockmode or SELECT-then-INSERT
22:29:26 there aren't that many places which do that
22:29:35 but if there's somehow an implicit transaction for all of our DB api calls, does that not mean there's locking of some sort on every table?
22:30:00 well, yes, but no :)
22:30:08 i was able to see this problem even after removing session.begin()
22:30:12 for instance
22:30:21 select 1; begin; update table ... where ..; commit; rollback;
22:30:23 (the lock wait timeouts, not the Deadlock)
22:30:28 that's the usual sqla query pattern for a single update
22:30:34 But if we change an API call to not use the ORM, then that should go away, right?
22:31:36 We can put sqla in trace mode to see all the commands it sends to the DB, and verify that simple stuff doesn't do any extra transaction stuff.
22:31:36 if sqla somehow forgets to do the final "commit; rollback;" then yea, the lock wait problem could happen anywhere
22:31:53 nod
22:32:03 atm, that's what I suspect somewhere.. but I don't know for sure
22:32:09 k
22:32:14 possible i'm wrong and it's only happening in certain calls
22:32:35 dunno!
22:32:53 i wonder whether it's related to eventlet?
22:33:02 but I know we're approaching the limits here at RAX in global-cells
22:33:07 with all of the DB calls it has to do
22:33:13 so I need DB pooling RSN
22:33:20 devananda: me too
22:33:36 I may have to resort to raw queries for things, at least internally, if I can't figure this out quickly
22:33:53 and I know this same problem will show up in conductor under load
22:34:05 (we're still using local conductor only)
22:34:37 i'm going to attempt to find the sqlalchemy issue first
22:34:44 sounds good
22:34:44 and if that takes too long, look at low level sqlalchemy calls
22:34:49 ++
22:35:33 i wouldn't mind seeing low level sqla calls upstream, esp considering it's better for performance
22:35:39 I will have a work-in-progress patch containing low-level sqla up soon, if you need a reference. It doesn't quite work yet.
22:35:55 i could def use a reference
22:36:06 I'll ping you when I upload it.
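
(Re the "trace mode" mentioned at 22:31:36: one way to do that is SQLAlchemy's built-in engine and pool logging, which prints every statement sent to the DB, including the implicit BEGIN/COMMIT/ROLLBACK, so a transaction that is never committed shows up in the log. A minimal sketch; the connection URL is a placeholder, not a real deployment setting.)

    # Minimal sketch: turn on engine and pool logging so every statement
    # (including implicit BEGIN/COMMIT/ROLLBACK) is visible in the logs.
    import logging
    from sqlalchemy import create_engine

    logging.basicConfig(level=logging.WARNING)

    # Placeholder URL for illustration only.
    engine = create_engine('mysql://nova:secret@localhost/nova',
                           echo=True,        # log all SQL sent to the DB
                           echo_pool=True)   # log connection checkout/checkin

    # Equivalent, without recreating the engine:
    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
    logging.getLogger('sqlalchemy.pool').setLevel(logging.DEBUG)
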
22:36:10 or.. it'd save some googling at least :)
22:36:15 hehe ty
22:36:59 Here's my fundamental problem with db-archiving: my current api call is to move *all* deleted rows to shadow tables in one call. That could take a long time. I don't know a clever way to subdivide the job.
22:37:26 comstud: would it be possible to disable eventlet in conductor, while still enabling tpool?
22:37:44 probably a crazy idea ...
22:38:02 dripton: ORDER BY id LIMIT X;
22:38:05 tpool is implemented in eventlet
22:38:10 eventlet.tpool
22:38:13 hah. yep, crazy idea
22:38:43 we could do our own threading, i suppose.. but you lose the pseudo-parallelism of pulling off the rabbit queue
22:39:02 maybe it's a win in the end, though, i dunno
22:39:14 i suspect kinda not
22:39:17 but i dunno!
22:39:25 devananda: yes, that works within one table. I'm worried about cross-table issues due to foreign keys, but we don't have many, so maybe it's okay.
22:39:42 dripton: there shouldn't be any FKs in production IMNSHO ....
22:40:00 dripton: but even so, i'm not sure how that would matter
22:40:11 dripton: loop on one table, in small chunks, until it's finished. then move to the next table
22:40:23 make sure each chunk is a separate transaction, too
22:40:27 devananda +1
22:40:44 otherwise you can blow away the innodb undo space, and it will stall replication, etc...
22:40:47 devananda: sure, FKs would just impose an ordering on how to move things. But, yeah, I'll do a bunch of little transactions and have a row limit per call to the api
22:41:45 i'm concerned about downtime for large deployments
22:41:53 with the changes to soft delete
22:41:55 the deleted column
22:42:18 maybe we just prune the tables first if we don't care about archiving
22:42:38 comstud: you mean downtime when running the archiving operation?
22:42:45 DB migration
22:42:47 in general
22:42:48 comstud: or when migrating the deleted column?
22:42:50 ah
22:42:52 yeah, that one
22:42:55 we ran a test...
22:43:13 we took the proposed migration and ran it against a copy of the DB
22:43:18 it took about 45 minutes IIRC
22:43:18 :)
22:43:23 =)))
22:43:26 i would assume that deployers are not going to just run all the migrations blindly, but we should include a note about the larger migrations
22:43:36 +1
22:43:36 comstud: 45min is nothing for a big ALTER TABLE ;)
22:43:52 yeah, but unfortunately, due to the use of the sqlalchemy orm...
22:43:53 alter table for all tables =)
22:43:55 comstud: I was planning to include another API call to nuke deleted rows
22:43:57 it means shit is broken during the whole migration
22:44:16 sure. which is why you have two DBs & HA
22:44:17 :)
22:44:17 i think we just prune our tables first
22:44:32 or that, hehe
22:45:55 i gotta drop off.. anything else for me?
22:46:00 anyone have other topics to bring up? it's seeming like we're about done
22:46:06 * comstud waits
22:46:18 I'm done
22:46:22 cool
22:46:26 thanks guys :)
22:46:30 thanks for the tip devananda
22:46:34 #endmeeting
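
(Re the chunked archiving discussed around 22:36:59-22:40:47: a rough sketch of the batch-per-transaction loop, one table at a time. The shadow_ table prefix, the deleted-column convention, the batch size, and the table list are assumptions for illustration, not the eventual nova implementation.)

    # Sketch only: move soft-deleted rows to shadow tables in small batches,
    # one transaction per batch, so a single huge statement doesn't blow out
    # the InnoDB undo space or stall replication.  Concurrent soft-deletes
    # are ignored for brevity.
    from sqlalchemy import text

    BATCH = 1000

    def archive_deleted_rows(engine, tables):
        for table in tables:
            while True:
                with engine.begin() as conn:        # one transaction per batch
                    rows = conn.execute(
                        text("SELECT id FROM %s WHERE deleted != 0 "
                             "ORDER BY id LIMIT :lim" % table),
                        {'lim': BATCH}).fetchall()
                    if not rows:
                        break                       # this table is done
                    max_id = rows[-1][0]
                    conn.execute(
                        text("INSERT INTO shadow_%s SELECT * FROM %s "
                             "WHERE deleted != 0 AND id <= :max_id"
                             % (table, table)),
                        {'max_id': max_id})
                    conn.execute(
                        text("DELETE FROM %s WHERE deleted != 0 "
                             "AND id <= :max_id" % table),
                        {'max_id': max_id})

Keeping each batch in its own transaction is what addresses the undo-space and replication concern raised at 22:40:44.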