Archive for December, 2014

29 December, 2014

Two Elasticache Redis Tips

by gorthx

This is all in the Amazon docs for ElastiCache Redis. But in case you don’t read them, or have inherited a system and want to keep your data, allow me to share what I learned this week:

1. Make sure you have appendonly enabled[1]. It is not enabled by default. Turning this on writes all changes to your data to an external file (AOF, for ‘append only file’). Without this, your database only exists in memory. So guess what happens when your instance reboots[2]? Say bye-bye to your data and hello to restoring from a backup.
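If you want to script that change, here’s a rough sketch with boto3 – the parameter group name and region are placeholders, and the group has to be one you created yourself (the default parameter groups can’t be modified) and be attached to your cluster:

# rough boto3 sketch; 'my-redis-params' and the region are placeholders
import boto3

elasticache = boto3.client('elasticache', region_name='us-east-1')

# turn on the AOF in a custom cache parameter group
elasticache.modify_cache_parameter_group(
    CacheParameterGroupName='my-redis-params',
    ParameterNameValues=[
        {'ParameterName': 'appendonly', 'ParameterValue': 'yes'},
    ],
)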

Which brings me to item #2:

2. You can’t restore to an existing instance, of course – you have to delete the instance first, then restore. However, deleting the instance also deletes all automated snapshots associated with that instance. This can be a bit surprising. (The ‘are you sure you want to delete this instance’ message does not include this information.) What I’ve done to get around this is make a manual copy of the automated snapshot I want to restore, prior to deleting the instance. In my experience, manual snapshots will stick around until you delete them, regardless of the status of the instance.
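The copy itself is a single API call; a rough boto3 sketch (both snapshot names here are made up):

# rough boto3 sketch; both snapshot names are made up
import boto3

elasticache = boto3.client('elasticache', region_name='us-east-1')

# copy the automated snapshot to a manual one *before* deleting the instance;
# manual snapshots survive the delete, automated ones don't
elasticache.copy_snapshot(
    SourceSnapshotName='automatic.my-redis-2014-12-28-06-00',
    TargetSnapshotName='my-redis-keep-this',
)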

You also probably want to configure the Multi-AZ failover, now that it’s available. (https://forums.aws.amazon.com/ann.jspa?annID=2709)


[1] – http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Redis.html
[2] – Keep in mind that this is a managed service; you do not have control over whether it reboots or not.

15 December, 2014

Simple test for autovacuum log messages

by gorthx

I had reason recently to suspect that autovacuum jobs weren’t being properly logged on my RDS instance. Instead of compulsively re-running “SELECT relname, last_autovacuum FROM pg_stat_user_tables” while I waited for one to happen, I set up this quick test:

Connect to the database and do the following:

-- verify settings
-- log_autovacuum_min_duration should be 0 ("log all") for the purposes of this test
SHOW log_autovacuum_min_duration;
-- vac messages are LOG level messages, so the default value 'warning' should suffice here
SHOW log_min_messages;
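-- (on RDS you change log_autovacuum_min_duration via the DB parameter group;
-- it's not something you can SET from a regular session)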

-- set up a table with a really low vac threshold
DROP TABLE IF EXISTS vac_test;
CREATE TABLE vac_test
(id serial primary key
, name text)
WITH (autovacuum_vacuum_threshold = 2, autovacuum_vacuum_scale_factor = 0)
;
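-- autovacuum kicks in once dead tuples exceed
--   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
-- so with threshold = 2 and scale_factor = 0, two dead tuples are enough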

-- add some data
INSERT INTO vac_test(name)
VALUES
('ankeny'),('burnside'),('couch'),('davis'),('everett'),('flanders'),('glisan'),('hoyt'),('irving')
;

-- check the stats/vac jobs
\x
SELECT * FROM pg_stat_user_tables WHERE relname = 'vac_test';
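-- n_dead_tup, last_autovacuum, and autovacuum_count are the columns to watch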

-- remove enough data to trigger an autovac
DELETE FROM vac_test WHERE id < 4;

-- check stats/vac jobs again
SELECT * FROM pg_stat_user_tables WHERE relname = 'vac_test';
\watch
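-- with no interval given, \watch re-runs the previous query every 2 seconds; \watch 5 polls every 5s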
-- assumes 9.3+
-- wait until you see an autovacuum job in the table
-- it'll help if you have autovacuum_naptime set to something short
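-- (check it with SHOW autovacuum_naptime; the default is 1min)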

Then go check the logs. You should have a helpful message about an automatic vacuum:


2014-12-07 21:01:32 PST LOG: automatic vacuum of table "postgres.public.vac_test": index scans: 1
pages: 0 removed, 1 remain
tuples: 3 removed, 6 remain
buffer usage: 60 hits, 4 misses, 4 dirtied
avg read rate: 33.422 MB/s, avg write rate: 33.422 MB/s
system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec

If you don’t, well, that’s the $64,000 question, isn’t it.

8 December, 2014

PDXPUG lab report – BDR

by gorthx

For the last PDXPUG lab of the year, we tried out BiDirectional Replication (BDR). Five of us just set up VMs on our laptops and followed the instructions on the wiki.

We only had about 90 minutes for this lab, so the goal was to get a basic configuration up & running, understand the available configuration parameters, and then (time permitting) break it – because that’s how you learn to put it back together.

Alexander said it worked very well in his tests; Robert set about breaking it and found an interesting edge case involving updates to primary keys. (Advisable or not, we all have a customer who’s going to do it!)

Maher and I were doing pretty well with our setups until we tried configuring BDR between our two machines. After wrestling with VMWare’s network settings and getting absolutely nowhere, I realized this all felt very familiar … Oh right, CentOS’s pre-configured firewall[1]. Which does not allow Postgres ports, natch. Once we fixed that, our machines could at last communicate correctly with each other, but we ran out of time before we could get BDR working between them. (Which led to some jokes about “NDR”.)

Craig Ringer posted yesterday about the work that’s gone into this project thus far, and some of the side benefits. BDR is a particularly tricky problem to solve; kudos to the team for all the hard work.

The Quick Start guide is very easy to follow. I’m also very happy with the quality of the log messages available from BDR. I encourage you to check it out for yourself!



[1] – Took me a bit of poking around to find it; it was moved from “System Administration” to “Sundry” in CentOS 7.
