2 February, 2015

Updating My Linux Command Line Toolbox, Episode 3

by gorthx

Part 2

This week’s tips:

1. ulimit -a will show you all settings, plus the units.

2. crontab -l -u [user] will read out another user’s crontab for you (assuming you have the right perms)

3. And what I call “diff-on-the-fly” – pass the output of shell commands to diff. I like this one because I don’t make a bunch of “temporary” files that I forget to clean up later. (Tips 2 and 3 combine nicely; see the example after this list.)

diff <([shell commands]) <([other shell commands])

For example, I need to compare ids in two files, but they’re in different fields in each file, and not in the same order:

diff <(cut -d"," -f1 file1 | sort -u) <(cut -d"," -f3 file2 | sort -u)
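
And the combined example promised above – tips 2 and 3 together, comparing two users’ crontabs without creating temp files (a sketch; assumes you have the perms to read both crontabs, e.g. you’re root, and ‘alice’ and ‘bob’ are made-up usernames):

# compare two users' crontabs without creating temporary files
diff <(crontab -l -u alice) <(crontab -l -u bob)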

12 January, 2015

No more cursing* with ddlgenerator

by gorthx

Do you have to load data from funky files into Postgres frequently? I do. I get a few requests per week of the “hey, can you give me a mass data extract for things that match the criteria in this spreadsheet” variety. Frequently, these are Windows-formatted, all-in-one-line “csv” files.

A year or so ago, I would have reformatted the file, reviewed the data, figured out appropriate data types for the various fields (and sometimes there are dozens of fields), written some SQL to create the table, tried to load the data, cursed a bit, lather, rinse, repeat.

I heard about Catherine Devlin’s ddlgenerator (https://github.com/catherinedevlin/ddl-generator) at last year’s PgOpen, and these days, all I do is this:

ddlgenerator --inserts postgres [datafile.csv] > datafile.sql

…then log into my database, and run:

\i datafile.sql

et voila.
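
A variation, if you’d rather skip the intermediate file – pipe straight into psql (‘mydb’ is a placeholder database name):

# generate the DDL and INSERTs and load them in one shot
ddlgenerator --inserts postgres datafile.csv | psql mydb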

As I said on Twitter, it’s not just the time but the annoyance this tool saves me that I really value.

Try it out!


* at least, not about this.

29 December, 2014

Two ElastiCache Redis Tips

by gorthx

This is all in the Amazon docs about ElastiCache Redis. In case you don’t read them, or have inherited a system and want to keep your data, allow me to share what I learned this week:

1. Make sure you have appendonly enabled [1]. It is not enabled by default. Turning this on writes all changes to your data to an external file (AOF, for ‘append only file’). Without this, your database only exists in memory. So guess what happens when your instance reboots [2]? Say bye-bye to your data and hello to restoring from a backup.

Which brings me to item #2:

2. You can’t restore to an existing instance, of course – you have to delete the instance first, then restore. However, deleting the instance also deletes all automated snapshots associated with that instance. This can be a bit surprising. (The ‘are you sure you want to delete this instance’ message does not include this information.) What I’ve done to get around this is make a manual copy of the automated snapshot I want to restore, prior to deleting the instance. Manual snapshots stick around until you delete them, regardless of the status of the instance. IME.
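
Both of the above can be scripted. Here’s a rough sketch with the unified aws CLI – every name below is a placeholder, and it’s worth double-checking the options against ‘aws elasticache help’ for your CLI version:

# 1. enable AOF persistence in the (non-default) parameter group your
#    cluster uses; the default parameter group can't be modified
aws elasticache modify-cache-parameter-group \
  --cache-parameter-group-name my-redis-params \
  --parameter-name-values ParameterName=appendonly,ParameterValue=yes

# 2. before deleting an instance, copy the automated snapshot you want to
#    keep to a manual snapshot -- manual snapshots survive the deletion
aws elasticache copy-snapshot \
  --source-snapshot-name automatic.my-redis-2014-12-28-05-00 \
  --target-snapshot-name my-redis-keeper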

You also probably want to configure the Multi-AZ failover, now that it’s available. (https://forums.aws.amazon.com/ann.jspa?annID=2709)


1 – http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Redis.html
2 – Keep in mind that this is a managed service; you do not have control over whether it reboots or not.

15 December, 2014

Simple test for autovacuum log messages

by gorthx

I had reason recently to suspect that autovacuum jobs weren’t being properly logged on my RDS instance. Instead of compulsively re-running “SELECT relname, last_autovacuum FROM pg_stat_user_tables” while I waited for one to happen, I set up this quick test:

connect to database and do the following:

-- verify settings
-- log_autovacuum_min_duration should be 0 ("log all") for the purposes of this test
SHOW log_autovacuum_min_duration;
-- vac messages are LOG level messages, so the default value 'warning' should suffice here
SHOW log_min_messages;

-- set up a table with a really low vac threshold
DROP TABLE IF EXISTS vac_test;
CREATE TABLE vac_test
(id serial primary key
, name text)
WITH (autovacuum_vacuum_threshold = 2, autovacuum_vacuum_scale_factor = 0)
;

-- add some data
INSERT INTO vac_test(name)
VALUES
('ankeny'),('burnside'),('couch'),('davis'),('everett'),('flanders'),('glisan'),('hoyt'),('irving')
;

-- check the stats/vac jobs
\x
SELECT * FROM pg_stat_user_tables WHERE relname = 'vac_test';

-- remove enough data to trigger an autovac
DELETE FROM vac_test WHERE id < 4;

-- check stats/vac jobs again
SELECT * FROM pg_stat_user_tables WHERE relname = 'vac_test';
\watch
-- assumes 9.3+
-- wait until you see an autovacuum job in the table
-- it'll help if you have autovacuum_naptime set to something short

Then go check the logs. You should have a helpful message about an automatic vacuum:


2014-12-07 21:01:32 PST LOG: automatic vacuum of table "postgres.public.vac_test": index scans: 1
pages: 0 removed, 1 remain
tuples: 3 removed, 6 remain
buffer usage: 60 hits, 4 misses, 4 dirtied
avg read rate: 33.422 MB/s, avg write rate: 33.422 MB/s
system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec

If you don’t, well, that’s the $64,000 question, isn’t it.

8 December, 2014

PDXPUG lab report – BDR

by gorthx

For the last PDXPUG lab of the year, we tried out BiDirectional Replication (BDR). Five of us just set up VMs on our laptops and followed the instructions on the wiki.

We only had about 90 minutes for this lab, so the goal was to get a basic configuration up & running, understand the available configuration parameters, and then (time permitting) break it – because that’s how you learn to put it back together.

Alexander said it worked very well in his tests; Robert set about breaking it and found an interesting edge case involving updates to primary keys. (Advisable or not, we all have a customer who’s going to do it!)

Maher and I were doing pretty well with our setups until we tried configuring BDR between our two machines. After wrestling with VMWare’s network settings and getting absolutely nowhere, I realized this all felt very familiar … Oh right, CentOS’s pre-configured firewall [1]. Which does not allow Postgres ports, natch. Once we fixed that, our machines could at last communicate correctly with each other, but we ran out of time before we could get BDR working between them. (Which led to some jokes about “NDR”.)

Craig Ringer posted yesterday about the work that’s gone into this project thus far, and some of the side benefits. BDR is a particularly tricky problem to solve; kudos to the team for all the hard work.

The Quick Start guide is very easy to follow. I’m also very happy with the quality of the log messages available from BDR. I encourage you to check it out for yourself!



1 – Took me a bit of poking around to find it; it was moved from “System Administration” to “Sundry” in CentOS 7.

3 November, 2014

PgConf.EU recap

by gorthx

I’m safely home from PgConf.EU. Madrid at this time of year was glorious, particularly to this Portlander. (I came home to a steady 12°C and rainy for the next week or so … ;))

We had over 300 attendees, making this the biggest Postgres conference to date, I hear. Of course, I couldn’t get to every talk I wanted (does that ever happen?), but here are some highlights:

Performance Archaeology was a thorough review of how Postgres performance has improved (or not) from version to version. I’m a sucker for benchmarks, and it makes me very happy that Tomas Vondra did this work :)

“Who’s the Fairest of Them All? Pg Interface Performance Comparison” was good from an informational standpoint (ODBC pretty much sucks) but also from a test design standpoint (hey, a valid use case for a Cartesian join!) Most relevant tip for me: complaints about db performance usually turn out to be caused by running queries returning one row at a time, one connection each – and usually from an ORM.

The demo of 3D rendering from Vincent Picavet’s “PostGIS Latest News” looked very promising. There’s a docker container available on his github; make sure you follow the setup instructions. I’m also excited about SP-GiST (space-partitioned GiST) indexes for spatial data, which should provide faster reads and build about 3X faster. It’s a WIP, and so far it only works on points.

XoF’s talk on “Finding and Repairing Data Corruption” covered some case histories from PgExperts. You all know I like the “war stories”; one thing I especially like about XoF’s talks is that he includes “oh yeah, btw, don’t do [x] to try to fix this because you’ll make things worse”. Additional recommendation: disable autovacuum while you’re debugging corruption, because you don’t want it kicking off & changing things.

As usual, Simon and Alvaro packed a ton of info into “Locks Unpicked”. The most immediately useful tip for me was how to avoid holding an ACCESS EXCLUSIVE lock for the whole validation scan when adding an FK; do it in two steps: 1. ALTER TABLE [blahdeblah] ADD CONSTRAINT [foo] … NOT VALID, then 2. ALTER TABLE [blahdeblah] VALIDATE CONSTRAINT [foo].
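
A sketch of that pattern, with made-up table and constraint names:

ALTER TABLE orders
  ADD CONSTRAINT orders_customer_id_fkey
  FOREIGN KEY (customer_id) REFERENCES customers (id)
  NOT VALID;  -- brief lock only; skips the long validation scan

ALTER TABLE orders
  VALIDATE CONSTRAINT orders_customer_id_fkey;  -- scans the table under a weaker lock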

I didn’t get to attend Dimitri’s “You Better Have Tested Backups”, but I was in on a rehearsal. My reaction can be summed up with “You had to do what?!” If this talk didn’t scare you, I don’t know what will.

Craig Ringer’s talk about usability started with a round of “Error Message Jeopardy”, and included a reminder that we were all new once, and have forgotten how much we know. I personally accidentally tried to run psql on a -FC pg_dump just last week, and really appreciate the addition of the HINT message! I also hadn’t heard about the update problems on Yosemite.

Stephen Frost’s “Hacking Postgres” was one of my favorites. We got a tour of the source tree, backend components, and some background about the community coding conventions. (“Programming in Postgres may not always be standard C.”)
General advice:
– check the mailing list for people working on similar problems
– create your patch as a context diff or with git diff
– read your actual patch before you submit it, just in case you did something dorky.

Don’t forget to post your slides & leave conference and speaker feedback.

It was wonderful meeting people I’ve only known from the mailing lists & IRC. (For example.) Thanks very much to PgConf.EU and SPI for helping me out! I hope to see you next year in Vienna.

6 October, 2014

RDS: Three weeks in

by gorthx

I’ve spent the past few weeks learning my way around Amazon’s RDS offering (specifically Postgres, and a bit of ElastiCache). It’s a mixed bag so far; for every feature that makes me think “Hey, this is neat!” I find at least one or two others that are not so thrilling.

One of the things that may annoy you if you’re used to running your own Pg server is not having “real” superuser access to your own cluster. There’s an rdsadmin database which you won’t have access to, and a couple of rds users as well. This is part of Amazon’s security implementation to protect all their users from destroying each other.

You need to exclude the rdsadmin database from any management queries you run, like so:
SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database
WHERE datname NOT IN ('postgres','template0','template1', 'rdsadmin')
ORDER BY datname;

Otherwise you’ll get a permissions error.

Today, I had this bit of fun:
pg_dumpall --globals-only
ERROR: permission denied for relation pg_authid

So no dumping your roles. (No pg_dumpall, period.) I don’t have high hopes for an alternative.
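
It’s not a real substitute, but you can at least see which roles exist, since pg_roles (unlike pg_authid) is world-readable:

-- list roles and their basic attributes (no passwords, obviously)
SELECT rolname, rolsuper, rolcreatedb, rolcreaterole, rolcanlogin
FROM pg_roles
ORDER BY rolname;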

I also haven’t found a place to report bugs – the “Feedback” button only allows you to contact tech support if you have a contract. So far I’m relying on the “I know someone who knows someone” method and the RDS user forums; AWS team members respond fairly quickly to posts there.

Aside from all that, I’m primarily interested in automating my instance management, so I’ve focused on the CLI tools.

My first thought was “Egads, I can’t get away from Java, can I”, but installing the toolkit turned out to be the easiest part. I set my $JAVA_HOME as outlined here to avoid that annoying “Unable to find $JAVA_HOME” error.
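
For reference, the fix amounts to a couple of lines in a shell profile (the path is just an example – point it at wherever your JRE actually lives):

# tell the RDS CLI toolkit where Java lives
export JAVA_HOME=/usr/lib/jvm/jre
export PATH=$JAVA_HOME/bin:$PATH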

The CLI support is much more extensive than I expected – you can manage pretty much everything, very easily, once you find your way around and learn the appropriate incantation. Which is my biggest complaint: the docs on the web and the cli help don’t always match each other, and sometimes neither is correct. It cost me a significant amount of startup time flipping back and forth between the cli help, the web help, and just flat-out experimenting and cursing that mysterious “Malformed input” error message. (I probably have unrealistically high standards after working with the Pg docs for so many years.)

Fun stuff you can do:
RDS offers “event subscriptions” to help you keep tabs on your instance health (failover, storage, etc.) [1]. They’re pretty easy to configure from the web console, but once you’ve done so, there’s no way to view or edit them except from the CLI. (At least, not that I can find.)
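
For the record, viewing them looks roughly like this with the newer unified aws CLI (rather than the Java toolkit mentioned above; the subscription name is made up):

# list every event subscription in this account/region
aws rds describe-event-subscriptions

# or just the one you care about
aws rds describe-event-subscriptions --subscription-name my-pg-events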

You can grab your Cloudwatch metrics, if you need to “roll your own” and integrate them into an existing monitoring system. (Yes, I briefly considered this!)

Log files are also accessible through the CLI for watching or downloading. It’s a huge improvement on the tiny green-text-on-black background on the web console. There’s no glob expression matching, so you have to request them one at a time. If you request a log file that doesn’t exist, you don’t get an error.
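
A sketch of that, again with the unified aws CLI (instance and log file names are made up; RDS names the Postgres logs by date and hour):

# see which log files the instance currently has
aws rds describe-db-log-files --db-instance-identifier myinstance

# download the tail end of one of them
aws rds download-db-log-file-portion \
  --db-instance-identifier myinstance \
  --log-file-name error/postgresql.log.2014-10-06-18 \
  --output text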

log_line_prefix is one of the unconfigurable GUCs on RDS, so if you are planning to use pgbadger [2], specify Amazon’s format as outlined at the bottom of this page.
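
If I’m reading that page correctly, the prefix is ‘%t:%r:%u@%d:[%p]:’ – verify it against the docs before relying on it – so a pgbadger run would look something like:

# parse a downloaded RDS log with Amazon's fixed log_line_prefix
pgbadger --prefix '%t:%r:%u@%d:[%p]:' postgresql.log.2014-10-06-18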



1 – Be warned that restoring a snapshot will generate an “instance deleted” alert for your newly-restored instance. Your on-call person may not appreciate this.

2 – Alternatively, the pg_stat_statements extension is available on RDS, so you could get query stats this way.

29 September, 2014

My PgConf.EU Schedule

by gorthx

Yep, I’m headed to Madrid! I’ll be reprising my Autovacuum talk from SCALE, and am really looking forward to meeting some new folks. I’ll be helping out at the conference in some capacity, so come say hello.

For reference, the conference schedule is here:
http://www.postgresql.eu/events/schedule/pgconfeu2014/

Other talks I plan to attend:

Wednesday:
Performance Archaeology sounds pretty cool!

Joe Conway’s Who’s the Fairest of Them All, since I didn’t get to catch it at PgOpen.

Open Postgres Monitoring

I’m interested in hearing Devrim’s opinions about Pg filesystems. (And pgpool, but that’s a discussion for the pub. ;) )

Next, a case study from Dimitri, this one on backups. (This sounds like one of those talks that will have people muttering “oh crap!!” and running out of the room.)

Thursday:
Dimitri again, on pgloader, because I always have data loading needs.

I want to see all three sessions in the next time slot (Logical decoding, PostGIS, and authentication), so I’ll wait until the day of to make up my mind.

Hmm Bruce’s indexing talk, or Christophe’s on Data Corruption?

I hope I never have to join 1 million tables.

Locks unpicked, Analytical Postgres, and of course the Lightning Talks will finish out the day.

Friday:
Unit testing with PgTAP

Disaster Planning and Recovery

Logical decoding for auditing

Replication of a single database? Sign me up!

Saturday I plan to do touristy things: check out the park, a museum or two, and hopefully a fabric shop, before my flight out. If anyone has any recs, I’d love to hear them.

22 September, 2014

PgOpen 2014 – quick recap

by gorthx

Many thanks to the speakers, my fellow conference committee members, and especially our chair, Kris Pennella, for organizing the best PgOpen yet.

(Speakers: please upload your slides or a link to your slides to the wiki.)

I came back with a big to-do/to-try list: check out Catherine Devlin’s DDL generator, familiarize myself with the FILTER aggregates in 9.4, make a web interface to the PDXPUG talks db (on a tiny little heroku instance), re-do the examples from the PostGIS tutorial, etc. Plus apparently I have a tiny little patch to write (flw). Many thanks to Denish Patel of OmniTI and Will Leinweber of Heroku for the personalized help sessions.
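
(For anyone who hasn’t seen it yet, FILTER is 9.4’s new way to write conditional aggregates; a tiny sketch against an imaginary talks table:)

-- count all talks, plus only those given this year, in one pass
SELECT count(*) AS all_talks,
       count(*) FILTER (WHERE presented_at >= '2014-01-01') AS talks_this_year
FROM talks;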

All in all, it was a wonderful conference & I’m looking forward to 2015’s version. If you’re interested in being on next year’s committee, let us know at program2014 at postgresopen.org.

15 September, 2014

PDXPUGDay Recap

by gorthx

Last weekend we held the biggest PDXPUGDay we’ve had in a while! 5 speakers + a few lightning talks added up to a fun lineup. About 1/3 of the ~50 attendees were in town for FOSS4G; I think the guy from New Zealand will be holding the “visitor farthest from PDXPUG” title for a good long while. Some folks from SEAPUG daytripped down (hi!) and we made plans for PDXPUG to road trip up there, probably for next year’s LinuxFestNW.

My highlights:
HSTORE, XML, JSON, and JSONB – David Wheeler
– Pg’s XML features are pretty neat, but I still think XML needs to DIAF. Perhaps that’s just my previous experience speaking.
– We renamed the HSTORE containment operator (@>) to “ice cream cone operator”, courtesy Mark Wong.
– Operations on JSON are slower than on HSTORE. That’s interesting.
– The storage overhead for JSONB is higher than for regular JSON, because it doesn’t compress very well. Josh B took an audience vote on improving compression at the expense of slowing down operations, and it was pretty evenly split.
– As usual, David included benchmarks and gave good overviews of when to use which data type.
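
(If you haven’t met the ice cream cone operator, a minimal example – assumes the hstore extension is installed, and 9.4 for jsonb:)

-- containment: does the left-hand value contain the right-hand one?
SELECT 'a=>1, b=>2'::hstore @> 'a=>1'::hstore;           -- true
SELECT '{"a": 1, "b": 2}'::jsonb @> '{"a": 1}'::jsonb;   -- true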

Snapshotted Data Versioning – Eric Hanson
Eric gave a talk about this at PDXPUG last year and was showing an updated version of what Aquameta’s up to. Eric’s philosophy is “make everything data, and then make a UI for it”.
– Implemented FUSE for Pg, bidirectional, so you can change your data by making updates directly in the database or by editing a text file on the filesystem. I believe this was described as “perverse” by a certain audience member.

Data Near Here – Veronika Megler

– Another update to a previous PDXPUG talk
– Scientists report that they spend up to 80% of their time just finding data relevant to their research. Not collecting – locating previously saved data. What a time sink.
– Parsers for each data format have to be custom coded.

Portal Update – Kristin Tufte
– Another example of pulling data from many different sources in many “unique” formats!
– Current research is investigating crosswalk buttons as a potential way to count pedestrians.
– I’d like to get ahold of the traffic light data, to see if the light at 32nd and Powell really is the longest light in Portland, or if that’s just my imagination.

AWS Faceoff (Cloud Shootout!) – Josh Berkus
I don’t care too much about Postgres on AWS – if I’m going to go that route, I’ll buy my own hardware, TYVM.
– RDS has a limited number of extensions installed, and PL/R isn’t one of them.* They did just add pg_stat_statements, which is cool. The Amazon support people are taking requests, and are attentive to the community, according to Josh. (I don’t have enough experience with that to have an opinion.)
– performance on RDS just isn’t that great; Josh got 325 TPS read/write, and 1430 TPS read-only.
– Then there was the cost comparison; RDS and Heroku don’t look that great compared to hosting it yourself, but you’d need to factor in the cost of support staff there.

Thanks for a great event!


* I decided to see for myself what extensions were available. Mark warned me “don’t shed too many tears for what they don’t have”. To my surprise, many of my favorites are available – plperl, plpgsql, postgis, and tablefunc! (SO EXCITE MUCH PIVOT)

Check what’s available on your instance with this command:
SHOW rds.extensions;

Note that “SELECT * FROM pg_available_extensions ORDER BY name;” will show you a bunch of stuff that’s not necessarily available on RDS. (Something I wish they’d fix.)
