*** sshank has quit IRC | 00:03 | |
*** dayou has quit IRC | 00:07 | |
*** dayou has joined #openstack-lbaas | 00:08 | |
*** cpusmith has quit IRC | 00:11 | |
openstackgerrit | Jude Cross proposed openstack/octavia-tempest-plugin master: Create scenario tests for load balancers https://review.openstack.org/486775 | 00:17 |
openstackgerrit | Jude Cross proposed openstack/octavia-tempest-plugin master: Create scenario tests for pools https://review.openstack.org/492673 | 00:17 |
openstackgerrit | Jude Cross proposed openstack/octavia-tempest-plugin master: Create scenario tests for listeners https://review.openstack.org/492311 | 00:17 |
openstackgerrit | xuchaochao proposed openstack/neutron-lbaas master: Add a compatible check before creating pool https://review.openstack.org/492357 | 01:36 |
*** fnaval has joined #openstack-lbaas | 01:55 | |
*** fnaval_ has joined #openstack-lbaas | 02:02 | |
*** fnaval has quit IRC | 02:04 | |
*** sanfern has joined #openstack-lbaas | 02:07 | |
*** ianychoi_ has joined #openstack-lbaas | 02:11 | |
*** sanfern has quit IRC | 02:17 | |
*** gongysh has joined #openstack-lbaas | 02:17 | |
*** ianychoi has quit IRC | 02:18 | |
*** dayou has quit IRC | 02:18 | |
*** openstackgerrit has quit IRC | 02:21 | |
*** armax has quit IRC | 02:21 | |
*** m-greene_ has quit IRC | 02:21 | |
*** armax has joined #openstack-lbaas | 02:21 | |
*** xingzhang has joined #openstack-lbaas | 02:22 | |
*** m-greene_ has joined #openstack-lbaas | 02:24 | |
*** dayou has joined #openstack-lbaas | 02:32 | |
rm_work | xgerman_: want to https://review.openstack.org/#/c/492233/ | 02:48 |
rm_work | johnsom: ^^ cut release after that merges? | 02:51 |
johnsom | Ok, yeah, I think we now have it covered, just waiting on merges. | 02:52 |
johnsom | rm_work Thanks! | 02:52 |
*** yamamoto has quit IRC | 02:52 | |
*** yamamoto has joined #openstack-lbaas | 02:53 | |
johnsom | (working on date night to get this in, so far she hasn't figured it out) | 02:56 |
johnsom | Downside is watching cheesy shows to keep her distracted | 02:57 |
*** openstackgerrit has joined #openstack-lbaas | 02:59 | |
openstackgerrit | Merged openstack/neutron-lbaas-dashboard master: Imported Translations from Zanata https://review.openstack.org/492448 | 02:59 |
rm_work | lol | 03:00 |
rm_work | heading out myself | 03:00 |
*** xingzhang_ has joined #openstack-lbaas | 03:05 | |
*** xingzhang has quit IRC | 03:08 | |
*** gongysh has quit IRC | 03:34 | |
*** sanfern has joined #openstack-lbaas | 03:39 | |
*** aojea has joined #openstack-lbaas | 03:41 | |
*** aojea has quit IRC | 03:46 | |
openstackgerrit | Merged openstack/octavia master: Fix LB creation with VIP port https://review.openstack.org/492649 | 03:48 |
*** gans has joined #openstack-lbaas | 03:52 | |
openstackgerrit | Merged openstack/octavia master: Update devstack readme.md https://review.openstack.org/492233 | 03:58 |
johnsom | Wahoo, RC1 release patch is up for review | 04:02 |
*** xingzhang_ has quit IRC | 04:23 | |
*** xingzhang has joined #openstack-lbaas | 04:24 | |
*** gans819 has joined #openstack-lbaas | 04:25 | |
*** gans has quit IRC | 04:28 | |
*** sanfern has quit IRC | 04:56 | |
*** sanfern has joined #openstack-lbaas | 05:03 | |
*** yamamoto has quit IRC | 05:03 | |
*** yamamoto has joined #openstack-lbaas | 05:04 | |
*** yamamoto has quit IRC | 05:11 | |
*** xingzhang has quit IRC | 05:30 | |
*** xingzhang has joined #openstack-lbaas | 05:30 | |
*** gongysh has joined #openstack-lbaas | 05:43 | |
*** armax has quit IRC | 05:51 | |
*** armax has joined #openstack-lbaas | 05:52 | |
*** armax has quit IRC | 05:52 | |
*** armax has joined #openstack-lbaas | 05:52 | |
*** armax has quit IRC | 05:53 | |
*** armax has joined #openstack-lbaas | 05:53 | |
*** armax has quit IRC | 05:54 | |
*** armax has joined #openstack-lbaas | 05:54 | |
*** armax has quit IRC | 05:54 | |
*** armax has joined #openstack-lbaas | 05:55 | |
*** armax has quit IRC | 05:55 | |
*** armax has joined #openstack-lbaas | 05:56 | |
*** armax has quit IRC | 05:56 | |
*** armax has joined #openstack-lbaas | 05:57 | |
*** armax has quit IRC | 05:57 | |
*** armax has joined #openstack-lbaas | 05:57 | |
*** armax has quit IRC | 05:58 | |
*** sanfern has quit IRC | 05:59 | |
*** tesseract has joined #openstack-lbaas | 06:16 | |
*** yamamoto has joined #openstack-lbaas | 06:19 | |
*** rcernin has joined #openstack-lbaas | 06:22 | |
*** yamamoto has quit IRC | 06:24 | |
*** rtjure has joined #openstack-lbaas | 06:32 | |
*** sanfern has joined #openstack-lbaas | 06:33 | |
*** rajivk has quit IRC | 06:33 | |
*** sanfern has quit IRC | 06:39 | |
*** ajo has joined #openstack-lbaas | 06:44 | |
*** rajivk has joined #openstack-lbaas | 06:45 | |
*** sanfern has joined #openstack-lbaas | 06:46 | |
*** yamamoto has joined #openstack-lbaas | 06:52 | |
*** amotoki has joined #openstack-lbaas | 06:57 | |
*** yamamoto has quit IRC | 07:07 | |
*** openstackgerrit has quit IRC | 08:02 | |
*** yamamoto has joined #openstack-lbaas | 08:04 | |
*** dayou has quit IRC | 08:17 | |
*** yamamoto has quit IRC | 08:17 | |
*** openstackgerrit has joined #openstack-lbaas | 08:21 | |
openstackgerrit | ZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id https://review.openstack.org/458308 | 08:21 |
*** Alex_Staf has joined #openstack-lbaas | 08:25 | |
openstackgerrit | OpenStack Release Bot proposed openstack/neutron-lbaas master: Update reno for stable/pike https://review.openstack.org/492872 | 08:28 |
openstackgerrit | OpenStack Release Bot proposed openstack/octavia master: Update reno for stable/pike https://review.openstack.org/492875 | 08:28 |
*** yamamoto has joined #openstack-lbaas | 08:44 | |
*** yamamoto has quit IRC | 08:56 | |
*** sanfern has quit IRC | 09:01 | |
*** sanfern has joined #openstack-lbaas | 09:01 | |
*** sanfern has quit IRC | 09:02 | |
*** sanfern has joined #openstack-lbaas | 09:02 | |
*** sanfern has quit IRC | 09:02 | |
*** sanfern has joined #openstack-lbaas | 09:03 | |
*** sanfern has quit IRC | 09:03 | |
*** sanfern has joined #openstack-lbaas | 09:04 | |
*** sanfern has quit IRC | 09:04 | |
*** sanfern has joined #openstack-lbaas | 09:04 | |
*** sanfern has quit IRC | 09:05 | |
*** yamamoto has joined #openstack-lbaas | 09:24 | |
*** yamamoto has quit IRC | 09:35 | |
*** ianychoi_ is now known as ianychoi | 09:36 | |
*** Alex_Staf has quit IRC | 10:01 | |
*** Alex_Staf has joined #openstack-lbaas | 10:15 | |
*** yamamoto has joined #openstack-lbaas | 10:32 | |
*** xingzhang has quit IRC | 10:46 | |
*** xingzhang has joined #openstack-lbaas | 10:46 | |
*** yamamoto has quit IRC | 10:48 | |
*** xingzhang has quit IRC | 10:59 | |
*** gans819 has quit IRC | 11:05 | |
*** gongysh has quit IRC | 11:17 | |
*** ajo has quit IRC | 11:22 | |
*** m-greene_ has quit IRC | 11:27 | |
*** m-greene_ has joined #openstack-lbaas | 11:27 | |
*** yamamoto has joined #openstack-lbaas | 11:35 | |
*** yamamoto has quit IRC | 11:48 | |
*** yamamoto has joined #openstack-lbaas | 11:48 | |
*** yamamoto has quit IRC | 11:55 | |
*** yamamoto has joined #openstack-lbaas | 11:57 | |
*** dasanind has quit IRC | 12:19 | |
*** zioproto has quit IRC | 12:19 | |
*** amitry has quit IRC | 12:19 | |
*** amitry has joined #openstack-lbaas | 12:19 | |
*** zioproto has joined #openstack-lbaas | 12:19 | |
*** dasanind has joined #openstack-lbaas | 12:20 | |
*** gongysh has joined #openstack-lbaas | 12:20 | |
*** gongysh has quit IRC | 12:20 | |
*** rtjure has quit IRC | 12:23 | |
*** rtjure has joined #openstack-lbaas | 12:26 | |
*** catintheroof has joined #openstack-lbaas | 12:30 | |
*** gongysh has joined #openstack-lbaas | 12:35 | |
*** sanfern has joined #openstack-lbaas | 12:35 | |
*** leitan has joined #openstack-lbaas | 13:12 | |
*** leyal has quit IRC | 13:18 | |
*** leyal has joined #openstack-lbaas | 13:18 | |
*** ajo has joined #openstack-lbaas | 13:19 | |
*** cpusmith has joined #openstack-lbaas | 13:34 | |
*** cpusmith_ has joined #openstack-lbaas | 13:36 | |
*** cpusmith has quit IRC | 13:40 | |
*** Alex_Staf has quit IRC | 13:51 | |
*** Alex_Staf has joined #openstack-lbaas | 13:53 | |
*** mdavidson has quit IRC | 14:26 | |
*** ajo has quit IRC | 14:30 | |
*** armax has joined #openstack-lbaas | 14:31 | |
*** mdavidson has joined #openstack-lbaas | 14:35 | |
*** fnaval_ has quit IRC | 14:38 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Update reno for stable/pike https://review.openstack.org/492875 | 14:47 |
*** rcernin has quit IRC | 14:57 | |
*** fnaval has joined #openstack-lbaas | 15:00 | |
*** xingzhang has joined #openstack-lbaas | 15:04 | |
*** xingzhang has quit IRC | 15:09 | |
*** armax has quit IRC | 16:04 | |
*** gongysh has quit IRC | 16:19 | |
*** amotoki is now known as amotoki__away | 16:30 | |
*** ajo has joined #openstack-lbaas | 16:40 | |
*** tesseract has quit IRC | 16:41 | |
*** ajo has quit IRC | 17:04 | |
*** sanfern has quit IRC | 17:06 | |
*** ajo has joined #openstack-lbaas | 17:09 | |
*** ajo has quit IRC | 17:11 | |
openstackgerrit | Merged openstack/octavia master: Update reno for stable/pike https://review.openstack.org/492875 | 17:38 |
*** sshank has joined #openstack-lbaas | 17:39 | |
*** yamamoto has quit IRC | 17:43 | |
*** sshank has quit IRC | 17:44 | |
*** ajo has joined #openstack-lbaas | 17:46 | |
*** sshank has joined #openstack-lbaas | 17:55 | |
*** Alex_Staf has quit IRC | 18:00 | |
*** leitan has quit IRC | 18:12 | |
johnsom | Infra is still having problems. Our release notes are borked | 18:14 |
johnsom | http://logs.openstack.org/69/690ccfd43fb141c26652e119f1d702b65414a194/post/octavia-releasenotes/238e6ab/console.html#_2017-08-11_17_50_10_439343 | 18:14 |
xgerman_ | yeah, my OSA thing is stuck, too | 18:18 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: DO NOT MERGE: Testing log retrieval from amphora https://review.openstack.org/491997 | 18:34 |
*** yamamoto has joined #openstack-lbaas | 18:44 | |
*** yamamoto has quit IRC | 18:49 | |
*** sshank has quit IRC | 19:08 | |
*** leitan has joined #openstack-lbaas | 19:10 | |
*** sshank has joined #openstack-lbaas | 19:22 | |
*** sshank has quit IRC | 19:26 | |
*** leitan_ has joined #openstack-lbaas | 19:31 | |
*** leitan has quit IRC | 19:31 | |
*** leitan_ has quit IRC | 19:33 | |
*** gcheresh has joined #openstack-lbaas | 19:41 | |
*** yamamoto has joined #openstack-lbaas | 19:46 | |
*** leitan has joined #openstack-lbaas | 19:50 | |
*** yamamoto has quit IRC | 19:55 | |
*** leitan_ has joined #openstack-lbaas | 20:00 | |
*** leitan has quit IRC | 20:03 | |
openstackgerrit | Merged openstack/python-octaviaclient master: Lowercase vip_Address in return https://review.openstack.org/492330 | 20:09 |
*** catintheroof has quit IRC | 20:15 | |
*** leitan_ has quit IRC | 20:49 | |
*** sshank has joined #openstack-lbaas | 21:01 | |
*** gcheresh has quit IRC | 21:08 | |
*** atoth has quit IRC | 21:25 | |
rm_work | xgerman_ / johnsom: periodically i'm seeing this, either of you see it? | 21:56 |
rm_work | http://paste.openstack.org/show/618231/ | 21:56 |
rm_work | when it happens, it seems to happen on every one of the HM processes running | 21:56 |
johnsom | Every one? | 21:57 |
rm_work | i mean, it happens once | 21:57 |
xgerman_ | nope, haven't seen that so far | 21:57 |
rm_work | and it prints that on all 6 of the HM processes I'm running | 21:57 |
johnsom | I wonder if it is a side effect of having > 1 o-hm | 21:57 |
rm_work | like ... all 6 of them tried to do the same lock | 21:57 |
johnsom | I wonder if it is a side effect of having > 1 o-hk | 21:57 |
rm_work | I do have exactly one o-hk actually | 21:58 |
rm_work | but 6 o-hm | 21:58 |
johnsom | Oh, sorry, right o-hm | 21:58 |
johnsom | Still context switching here | 21:58 |
rm_work | heh | 21:58 |
johnsom | Just about have cracked the OSC plugin error issue | 21:58 |
rm_work | ah nice | 21:59 |
rm_work | godspeed | 21:59 |
johnsom | Mostly fine tuning at this point | 22:00 |
johnsom | So, hmmm, that should be a fairly short lived query for update.... | 22:01 |
johnsom | I wonder what your lock timeout is.... | 22:01 |
rm_work | yeah, wonder if the DB being slow for some reason could do it | 22:01 |
rm_work | i can find out | 22:01 |
rm_work | it's my own shitty percona-xtradb cluster | 22:02 |
rm_work | running on small VMs lol | 22:02 |
johnsom | Well, that and if you have a large number, we probably don't have proper indexes on that table. | 22:02 |
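An index matching that filter would be a composite one on (busy, last_update). Purely as a hypothetical sketch of what adding it could look like with Alembic (the revision identifiers, the index name, and whether Octavia actually lacks such an index are all assumptions, not taken from this log):

    """Hypothetical Alembic migration sketch: index for the health manager's
    WHERE busy = ... AND last_update < ... filter on amphora_health."""
    from alembic import op

    # Placeholder revision identifiers.
    revision = 'add_amphora_health_idx'
    down_revision = None


    def upgrade():
        # Composite index matching the stale-amphora lookup.
        op.create_index('idx_amphora_health_busy_last_update',
                        'amphora_health', ['busy', 'last_update'])


    def downgrade():
        op.drop_index('idx_amphora_health_busy_last_update',
                      table_name='amphora_health')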
rm_work | hmmmmm | 22:02 |
*** sshank has quit IRC | 22:03 | |
*** cpusmith_ has quit IRC | 22:03 | |
johnsom | Can you run a manual SQL query? | 22:03 |
rm_work | yup yup | 22:04 |
johnsom | Give me a minute to build a query | 22:05 |
johnsom | select * from amphora_health where busy = 0 and last_update < now(); | 22:09 |
johnsom | See how long that takes | 22:09 |
johnsom | Technically it should be select * from amphora_health where busy = 0 and last_update < now() LIMIT 1; but I don't trust sqlalchemy to not pull them all back and THEN give you the first() | 22:11 |
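For what it's worth, the paste later in this log shows the ORM does push both the LIMIT and the FOR UPDATE into the emitted SQL. A self-contained SQLAlchemy 1.x-style sketch (the table definition is reconstructed from the columns visible in that paste, not the real Octavia model) that prints the compiled statement:

    from sqlalchemy import (Boolean, Column, DateTime, MetaData, String, Table,
                            func, select)
    from sqlalchemy.dialects import mysql

    metadata = MetaData()
    amphora_health = Table(
        'amphora_health', metadata,
        Column('amphora_id', String(36), primary_key=True),
        Column('last_update', DateTime),
        Column('busy', Boolean),
    )

    stmt = (select([amphora_health])
            .where(amphora_health.c.busy == False)  # noqa: E712
            .where(amphora_health.c.last_update < func.now())
            .limit(1)
            .with_for_update())

    # The rendered SQL ends in "... LIMIT ... FOR UPDATE", i.e. the limit is
    # part of the statement rather than applied client-side after fetching.
    print(stmt.compile(dialect=mysql.dialect()))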
johnsom | show variables like 'innodb_lock_wait_timeout'; | 22:13 |
johnsom | Would give you the timeout, mine is 50 which is seconds. | 22:13 |
rm_work | also 50 | 22:15 |
rm_work | 0.1s | 22:15 |
rm_work | *shruggie* | 22:15 |
rm_work | maybe something with replication taking some time | 22:16 |
rm_work | not sure | 22:16 |
rm_work | it doesn't seem to happen very often | 22:16 |
rm_work | and i haven't seen anything catastrophic happen because of it | 22:16 |
rm_work | so | 22:16 |
*** sshank has joined #openstack-lbaas | 22:17 | |
johnsom | No, it shouldn't cause a problem. It just means the "deadlocked" transactions rolled back. | 22:18 |
johnsom | So, it would just go back and try again after the next sleep interval | 22:19 |
johnsom | It doesn't happen often? Like how often? I assume failovers are working.... | 22:19 |
rm_work | assuming that didn't immediately cause it to do a failover | 22:19 |
johnsom | No, it would at worst *NOT* cause failovers | 22:20 |
rm_work | ah yeah i wonder if that could be trying to do the lock for the busy flag? | 22:20 |
rm_work | hmmm actually | 22:20 |
rm_work | when i restarted the HMs, it did immediately do one failover <_< | 22:20 |
rm_work | i wonder if that had been pending the whole time | 22:20 |
rm_work | let me trigger a failover and see what happens | 22:21 |
johnsom | It does a select for update, looking for one amp that hasn't received a heartbeat in the interval, then marks it "busy" for further failover work. | 22:21 |
johnsom | It's this https://github.com/openstack/octavia/blob/master/octavia/db/repositories.py#L1072 | 22:21 |
johnsom | This could be more sqlalchemy transaction BS. We could switch this out of the basic "session" to a lock session with auto commit disabled. | 22:22 |
johnsom | I bet that is it. SQLalchemy is being super dumb and NOT encapsulating those two into one transaction like it should. That would make total sense actually of how the "deadlock" is happening. | 22:25 |
johnsom | HM A does the select, HM B does select, HM A "autocommits" the select, HM B gets the lock, HM A goes to update the busy flag and sqlalchemy is being dumb and trying to re-lock. | 22:26 |
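The fix johnsom sketches here boils down to holding the SELECT ... FOR UPDATE and the subsequent busy-flag UPDATE inside one transaction, so the row lock is never dropped between the two statements. A minimal illustration with a plain SQLAlchemy session (the model, connection URL, and function name are illustrative only, not the actual Octavia patch in review 493252):

    import datetime

    from sqlalchemy import Boolean, Column, DateTime, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()


    class AmphoraHealth(Base):
        # Mirrors the columns visible in the pasted query above.
        __tablename__ = 'amphora_health'
        amphora_id = Column(String(36), primary_key=True)
        last_update = Column(DateTime)
        busy = Column(Boolean)


    engine = create_engine('mysql+pymysql://user:pass@db-host/octavia')  # placeholder URL
    Session = sessionmaker(bind=engine)  # autocommit is off by default


    def claim_stale_amphora(heartbeat_timeout):
        """Pick one stale, non-busy amphora and mark it busy, atomically."""
        expired = (datetime.datetime.utcnow()
                   - datetime.timedelta(seconds=heartbeat_timeout))
        session = Session()
        try:
            # Both statements run in a single transaction, so the row lock
            # taken by FOR UPDATE is still held when the UPDATE runs.
            amp = (session.query(AmphoraHealth)
                   .with_for_update()
                   .filter_by(busy=False)
                   .filter(AmphoraHealth.last_update < expired)
                   .first())
            if amp is None:
                session.rollback()
                return None
            amp.busy = True
            session.commit()  # the single commit releases the row lock
            return amp.amphora_id
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()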
rm_work | ummmm | 22:26 |
johnsom | rm_work Want me to push a fix? | 22:26 |
rm_work | i don't think failovers are happening | 22:26 |
johnsom | To try at least? | 22:26 |
rm_work | ah i haven't read what you said | 22:26 |
rm_work | AH | 22:27 |
rm_work | one of them printed this too at the end: | 22:27 |
rm_work | [SQL: u'SELECT amphora_health.amphora_id AS amphora_health_amphora_id, amphora_health.last_update AS amphora_health_last_update, amphora_health.busy AS amphora_health_busy \nFROM amphora_health \nWHERE amphora_health.busy = false AND amphora_health.last_update < %(last_update_1)s \n LIMIT %(param_1)s FOR UPDATE'] [parameters: {u'param_1': 1, u'last_update_1': datetime.datetime(2017, 8, 11, 21, 58, 59, 441736)}] | 22:27 |
johnsom | Well, don't forget the default timeout before failover triggers is a bit long | 22:27 |
rm_work | johnsom: i set the time to like 12 hours ago | 22:27 |
johnsom | Check interval | 22:27 |
rm_work | yeah but.... | 22:27 |
rm_work | i've tried it a few times | 22:27 |
rm_work | it keeps getting updated before it can actually trigger | 22:27 |
johnsom | Yeah | 22:28 |
johnsom | rm_work So test patch or no? | 22:29 |
rm_work | hmm maybe | 22:29 |
rm_work | let me delete this VM | 22:29 |
rm_work | and see if i can get it to trigger | 22:29 |
johnsom | That will work.... | 22:29 |
*** leitan has joined #openstack-lbaas | 22:30 | |
rm_work | ummm yeah | 22:31 |
rm_work | i think once that deadlock happens once | 22:31 |
rm_work | it stops doing failover checks | 22:31 |
rm_work | and it happens almost instantly | 22:31 |
rm_work | after starting the services | 22:32 |
rm_work | :/ | 22:34 |
johnsom | Yeah, I could see how if sqlalchemy is dumb that could happen. Running tox now | 22:37 |
rm_work | what's the change | 22:39 |
rm_work | i'm applying it by hand anyway | 22:39 |
rm_work | to test | 22:39 |
rm_work | johnsom: | 22:43 |
johnsom | Just a sec, making sure this is just a test issue and not something else | 22:43 |
*** kbyrne has joined #openstack-lbaas | 22:44 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Fix health monitor DB locking. https://review.openstack.org/493252 | 22:44 |
johnsom | There you go, there is a unit test false positive I need to fix (side effect is using None, which doesn't work with this change) | 22:45 |
*** sshank has quit IRC | 22:45 | |
rm_work | nope | 22:52 |
rm_work | DBDeadlock: (pymysql.err.InternalError) (1213, u'WSREP detected deadlock/conflict and aborted the transaction. Try restarting the transaction') [SQL: u'SELECT amphora_health.amphora_id AS amphora_health_amphora_id, amphora_health.last_update AS amphora_health_last_update, amphora_health.busy AS amphora_health_busy \nFROM amphora_health \nWHERE amphora_health.busy = false AND amphora_health.last_update < %(last_update_1)s \n LIMIT | 22:52 |
rm_work | %(param_1)s FOR UPDATE'] [parameters: {u'param_1': 1, u'last_update_1': datetime.datetime(2017, 8, 11, 22, 51, 18, 168861)}] | 22:52 |
rm_work | still got that | 22:52 |
johnsom | You updated all of the o-hm's? | 22:53 |
rm_work | yes | 22:53 |
rm_work | http://paste.openstack.org/show/618232/ | 22:54 |
rm_work | tried again just now too | 22:54 |
rm_work | seems to happen every failover <_< | 22:54 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Fix health monitor DB locking. https://review.openstack.org/493252 | 22:55 |
johnsom | Hmmm, ok, try this, | 22:55 |
rm_work | oh wait hold on | 22:55 |
johnsom | I didn't like that subtransaction BS anyway | 22:55 |
rm_work | i missed one thing | 22:55 |
rm_work | oh, no i didn't | 22:55 |
rm_work | k yeah trying the new thing | 22:56 |
rm_work | worse | 22:58 |
rm_work | much worse | 22:58 |
rm_work | lol | 22:58 |
*** ssmith has joined #openstack-lbaas | 22:58 | |
rm_work | http://paste.openstack.org/show/618233/ | 22:58 |
johnsom | Ah, yeah, the oslo_db thing... Just sec | 22:59 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Fix health monitor DB locking. https://review.openstack.org/493252 | 23:02 |
johnsom | Forgot that it auto-magically starts the session on your behalf | 23:02 |
johnsom | rm_work give that a spin | 23:03 |
rm_work | yeah | 23:03 |
rm_work | it's spinning currently | 23:03 |
johnsom | +1 | 23:03 |
rm_work | erg | 23:04 |
rm_work | http://paste.openstack.org/show/618234/ | 23:05 |
rm_work | ah the line numbers are off by one in health_manager because i didn't use your blank line | 23:05 |
johnsom | That is ok | 23:06 |
rm_work | yeah just letting you know | 23:06 |
rm_work | in case it was confusing | 23:06 |
johnsom | Well, maybe it's against the update thread.... | 23:08 |
johnsom | Can you grab SHOW ENGINE INNODB STATUS from the DB after that happens? | 23:09 |
rm_work | what are you looking for in this | 23:14 |
rm_work | it's hufe | 23:14 |
rm_work | *huge | 23:14 |
johnsom | There should be a locks and/or deadlocks section. | 23:14 |
johnsom | (I haven't seen one in a while) | 23:14 |
rm_work | hmm | 23:15 |
*** fnaval has quit IRC | 23:16 | |
rm_work | http://paste.openstack.org/show/618235/ | 23:17 |
rm_work | whelp this is kinda shitty | 23:27 |
johnsom | Doing another spin | 23:30 |
johnsom | 3 minutes | 23:30 |
johnsom | There is another report we can run on mysql, but it dumps to the mysql error log which I assume you don't have access to | 23:31 |
rm_work | i am (g)root | 23:31 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Fix health monitor DB locking. https://review.openstack.org/493252 | 23:31 |
johnsom | Ok, so that wraps the only other place that touches that table in a non-autocommit as well | 23:32 |
rm_work | one advantage of spinning up my very own snowflake of a sql cluster | 23:32 |
*** sshank has joined #openstack-lbaas | 23:32 | |
johnsom | Well, give that a go, then we will dump some logs | 23:33 |
rm_work | still deadlocking | 23:35 |
johnsom | SoB | 23:35 |
johnsom | Ok, so mysql fun | 23:36 |
johnsom | SET GLOBAL innodb_print_all_deadlocks = 'ON'; | 23:37 |
johnsom | Re-trigger the deadlock, then in the mysql error log it should have dumped details | 23:37 |
rm_work | retriggered | 23:40 |
rm_work | where's the error log... *looks* | 23:40 |
johnsom | /var/log/mysql/octavia/error | 23:41 |
johnsom | /var/log/mysql/octavia/error.log | 23:41 |
rm_work | errr | 23:41 |
rm_work | the mysql server? | 23:41 |
rm_work | does it know about octavia? lol | 23:41 |
johnsom | octavia is the database name | 23:42 |
johnsom | oh, maybe it is just /var/log/mysql | 23:42 |
rm_work | uhh | 23:42 |
rm_work | hmmm | 23:42 |
johnsom | it's under /var/lib/mysql/octavia for the data files | 23:42 |
rm_work | maybe percona is different | 23:42 |
johnsom | Yeah, check /var/lib/mysql/octavia | 23:42 |
johnsom | or just /var/lib/mysql | 23:43 |
rm_work | ah some stuff here | 23:43 |
rm_work | unrelated but this is interesting | 23:44 |
rm_work | 2017-08-11T23:43:49.406278Z 80453 [Warning] InnoDB: Cannot add field `l7rule_3_value_273` in table `tmp`.`#sql_2a5a_0` because after adding it, the row size is 8127 which is greater than maximum allowed size (8126) for a record on index leaf page. | 23:44 |
johnsom | Yeah, I have that too | 23:44 |
johnsom | https://bugs.mysql.com/bug.php?id=77398 | 23:44 |
johnsom | No good answer there however | 23:45 |
rm_work | hmmm | 23:46 |
rm_work | not seeing it | 23:46 |
rm_work | maybe need to set that on every server | 23:46 |
rm_work | funtimes | 23:47 |
xgerman_ | if we only stuck to postgres… | 23:47 |
johnsom | Hahahaha | 23:47 |