We’ve had a bunch of new servers in place for around three months now, and they’d been performing just fine.
Then, out of the blue, our monitoring started throwing alerts on seemingly random servers. Our queues were building up – basically, database performance had dropped dramatically and our processing scripts couldn’t stuff data into the DBs fast enough.
What could be causing it?
I took a quick glance at our monitoring, and various statistics confirmed the problems. You can clearly see in the following graphs that CPU Wait and Load Average increased at around 14:30, while at the same time MySQL throughput and command counters dropped dramatically:

So, I’d found the symptoms; what could be the cause?
Digging around in the system logs, I found the following lines (emphasis is mine):
Apr 26 12:55:31 b008 Server Administrator: Storage Service EventID: 2176 The controller battery Learn cycle has started.: Battery 0 Controller 0
Apr 26 12:56:36 b008 Server Administrator: Storage Service EventID: 2266 Controller log file entry: Battery is discharging: Battery 0 Controller 0
Apr 26 12:56:36 b008 Server Administrator: Storage Service EventID: 2248 The controller battery is executing a Learn cycle.: Battery 0 Controller 0
Apr 26 14:21:52 b008 Server Administrator: Storage Service EventID: 2278 The controller battery charge level is below a normal threshold.: Battery 0 Controller 0
Apr 26 14:21:53 b008 Server Administrator: Storage Service EventID: 2188 The controller write policy has been changed to Write Through.: Battery 0 Controller 0
Apr 26 14:21:53 b008 Server Administrator: Storage Service EventID: 2199 The virtual disk cache policy has changed.: Virtual Disk 0 (Virtual Disk 0) Controller 0 (PERC 6/i Adapter)
Apr 26 14:55:06 b008 Server Administrator: Storage Service EventID: 2177 The controller battery Learn cycle has completed.: Battery 0 Controller 0
Apr 26 14:55:21 b008 Server Administrator: Storage Service EventID: 2247 The controller battery is charging.: Battery 0 Controller 0
Apr 26 14:55:21 b008 Server Administrator: Storage Service EventID: 2278 The controller battery charge level is below a normal threshold.: Battery 0 Controller 0
Apr 26 15:25:41 b008 Server Administrator: Storage Service EventID: 2279 The controller battery charge level is operating within normal limits: Battery 0 Controller 0
Apr 26 15:25:41 b008 Server Administrator: Storage Service EventID: 2189 The controller write policy has been changed to Write Back.: Battery 0 Controller 0
Apr 26 15:25:42 b008 Server Administrator: Storage Service EventID: 2199 The virtual disk cache policy has changed.: Virtual Disk 0 (Virtual Disk 0) Controller 0 (PERC 6/i Adapter)
Apr 26 18:50:26 b008 Server Administrator: Storage Service EventID: 2358 The battery charge cycle is complete.: Battery 0 Controller 0
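In hindsight, alerting on these syslog events would have caught the slowdown the moment it started. Here's a minimal sketch of such a check, assuming the events land in a file like /var/log/messages (the EventID 2188 string is taken straight from the log above; the function name is my own):

```shell
# check_bbu_log: scan a syslog file for the PERC "write policy changed
# to Write Through" event (EventID 2188 in the log above) and warn if found.
check_bbu_log() {
    if grep -q 'EventID: 2188' "$1" 2>/dev/null; then
        echo "WARN: controller write policy changed to Write Through"
        return 1
    fi
    echo "OK"
}

# Example: check_bbu_log /var/log/messages
```

Wire that into your monitoring system of choice and you get an alert as soon as the cache policy is downgraded, rather than when the queues start backing up.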
Bingo!
The RAID controller is running a battery learn cycle. When the battery charge drops below a certain threshold, the controller switches the write policy from Write Back to Write Through, and that kills disk performance. This test is enabled by default and is configured to run every 90 days. Of course, Sod’s law dictates that it had to trigger during our busy period!
The Dell tools are not able to turn off this feature, but the LSI MegaCli tool can (Dell PERC 6/i controllers are re-badged LSI cards). I’ve run the following script on all servers (thanks to burr86 on ##infra-talk @ Freenode):
#!/bin/sh
TMPFILE=$(mktemp -p /tmp bbu.relearn.off.XXXXXXXXXX) || exit 1
echo "autoLearnMode=1" > $TMPFILE  # or =0 to enable the bbu relearn
MegaCli -AdpBbuCmd -SetBbuProperties -f$TMPFILE -a0
rm -f $TMPFILE
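Afterwards it's worth confirming the property actually changed by querying the BBU again. A tiny helper for that, assuming MegaCli reports the same "Auto-Learn Mode: Disabled" string I grep for elsewhere (the function name is my own):

```shell
# autolearn_disabled: reads `MegaCli -AdpBbuCmd -GetBbuProperties -a0`
# output on stdin and succeeds only if auto-learn is reported as disabled.
autolearn_disabled() {
    grep -q 'Auto-Learn Mode: Disabled'
}

# On a real box:
#   MegaCli -AdpBbuCmd -GetBbuProperties -a0 | autolearn_disabled && echo disabled
```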
I wrote a puppet class to run this script on all nodes:
class megacli::check {
  exec { 'perc_bbu_autolearn.sh':
    require => Class['megacli::install'],
    cwd     => '/tmp',
    path    => '/usr/local/bin:/bin',
    unless  => 'MegaCli -AdpBbuCmd -GetBbuProperties -a0 | grep -q "Auto-Learn Mode: Disabled"',
  }
}

One caveat: with auto-learn disabled, the controller will no longer run learn cycles on its own, so they should be triggered manually during a quiet period. Dell’s omconfig can do that:

omconfig storage battery action=startlearn controller=0 battery=0
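Rather than remembering to run that omconfig command by hand, you could schedule it for a known quiet window. A sketch as a cron entry; the schedule, file name, and omconfig path are all assumptions to adapt to your environment:

```shell
# /etc/cron.d/perc-bbu-learn  (hypothetical file name)
# Kick off the PERC battery learn cycle at 04:00 on the 1st of
# Jan/Apr/Jul/Oct -- roughly the controller's default 90-day interval,
# but at a time of our choosing instead of the controller's.
# The omconfig path is an assumption; adjust for your OMSA install.
0 4 1 1,4,7,10 * root /opt/dell/srvadmin/bin/omconfig storage battery action=startlearn controller=0 battery=0
```

The battery still gets its periodic learn cycle (skipping it entirely shortens battery life and hides a failing battery), but the write-cache downgrade now happens when nobody is watching the queues.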