12 January, 2024

Cycliq Bike Camera Review: Fly12 Sport and Fly6CE Gen3

by gorthx

I purchased replacement Cycliq cameras in late October 2022 when I checked my cam footage after a particularly frightening incident of harassment by a driver, and discovered the cheaper camera I was using hadn’t captured the license plate*.

My main reason for having these is still driver accountability. (And I shouldn’t have to do this! But that’s another post.)

I use the Fly12 Sport as my front cam and the Fly6 Gen 3 as the rear cam. Battery life is still excellent; the Fly12 lasts at least 5 hours; I can get 4 out of the Fly6.

What’s changed from the previous models:

The mounts have supposedly been “improved” in some way, but the cameras still snap into the old mounts just fine. Getting the cams on and off the new mounts the first few times was VERY difficult – I had to get out the vise grips for the Fly6.

The Fly6 came with a non-elastic leash; I really prefer the elastic ones from the prior version.

Both cams now have nifty silicone cases for extra water protection.

The case for the Fly6 comes with two lens protectors, one each for the camera and light. They’re a lot easier to put in when the case is off the camera, but it’s fussy to put the case on without making the protectors pop back out. You also have to take the case off for charging, and connecting to a PC to view your footage. (I just found the smaller of the lens protectors a couple of weeks ago, after about a year of it flying off and getting lost.)

Bluetooth pairing (with my mid-range Android phone) works on the first try about 75% of the time, which is a big improvement – good enough that I don’t have to mess with the desktop app at all. (I don’t know if the app is even available anymore.)

These keep time now when they’re off; I don’t have to resync them much.

They’ve been much more reliable so far. I haven’t had to do a factory reset on either camera.

The Fly12 Sport is lighter and smaller than the original. It also has an info screen that shows remaining battery, the time, and the lighting mode. I like this more than I thought I would.

The Fly12 Sport has more resolution and frame rate options. HDR is available at 1080p 30fps. I experimented with these settings a bit, and settled on 1440p and 30 fps plus image stabilization. The HDR is nice, but you can’t use it with image stabilization, which is more important on our roads here.

For the Fly6, I’m using 1080p 30fps with a 5-minute video length.

Resulting .mp4 file sizes
Fly12 Sport: 1440p 30fps + 5 minutes = 1.1 GB
Fly6 Gen3: 1080p 30fps + 5 minutes = 800 MB

The manual still says “We highly recommend formatting your microSD card regularly.” What is “regular”? Are we talking daily? Once a quarter? It would be nice to have an “after [x] hours of use” guideline.

I accidentally bought the wrong case for the Fly12 (got the one that fits my dead camera) and the customer service folks were good about exchanging it, I just had to pay shipping.

HOWEVER. If you can make your website pop up “hey, you’re buying product x, do you want these accessories with it?” it should also be able to check the order: “hey, you’re purchasing accessory y which doesn’t fit product x in your cart – are you sure?” because that could save a lot of hassle. I can’t be the only person who made this mistake.

All that’s left to test is longevity and durability. I’ve had this set about 14 months and really hope they last past the 18-month mark, where the previous set started to die.


* It turned out that the car itself was distinctive enough that the perp was caught and charged, but that took several months.

3 March, 2023

One Thing You Wish You Knew While Learning PostgreSQL

by gorthx

This month’s PGSQL Phriday #006 question has a very easy answer for me:

\e [filename]

\e (without a filename) lets you edit the most-recently run query and is very helpful for fixing typos, adding/removing columns, etc.

If you like to use \i to run queries from an external file (and who doesn’t!), use \e [filename].

I learned about \e from David Wheeler in the early days of PGXPUG – up to that point I’d employed the time-honored tradition of copying & pasting my queries out of a text editor, so this stands out as a life-changing moment.
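A minimal illustration of the workflow, assuming a scratch file named queries.sql (any filename works):

```sql
-- \i runs the contents of a file:
\i queries.sql

-- \e [filename] opens that file in $EDITOR; when you save and quit,
-- psql runs the (possibly edited) contents:
\e queries.sql

-- \e with no filename re-opens the most recent query for editing:
\e
```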

10 February, 2023

Quick Logical Replication Checklist

by gorthx

By request…

Before starting the publication/subscription, check your schemas.
(Adjust if you’re not replicating all tables, or have dropped indexes/constraints to make the initial sync go faster)
– Do you have the correct number of tables on both sides?
– Do you have the correct number of indexes on both sides?
– Do you have the same number of constraints on both sides?
– Are all of the indexes and constraints valid?
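These checks can be scripted; here’s a sketch of the queries I’d run on both sides and compare (adjust the schema filter if you’re excluding anything):

```sql
/* Run on both publisher and subscriber; the counts should match. */
SELECT count(*) AS user_tables  FROM pg_stat_user_tables;
SELECT count(*) AS user_indexes FROM pg_stat_user_indexes;
SELECT count(*) AS constraints
  FROM pg_constraint con
  JOIN pg_namespace n ON n.oid = con.connamespace
 WHERE n.nspname NOT IN ('pg_catalog', 'information_schema');

/* Any rows returned here are invalid indexes: */
SELECT indexrelid::regclass AS invalid_index
  FROM pg_index WHERE NOT indisvalid;

/* ...and these are unvalidated constraints: */
SELECT conrelid::regclass, conname
  FROM pg_constraint WHERE NOT convalidated;
```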

After starting replication, check the logs on the publisher and the subscriber for errors.
– Investigate anything unusual.
– If you’re getting duplicate key errors, check the old database logs first – this may be normal!
– If it’s not normal, this can indicate something is writing to the subscriber. Find it and make it stop [1], then fix the situation [2].
– If you are getting duplicate key errors after reversing the direction of replication: did you remember to copy the sequences over?

Then connect to the databases and check them out from psql.

On the publisher:

/* "describe publication" - is this information correct? */
\dRp+


/*
pg_replication_slots.slot_name should be the name of the subscription (as it appears on the subscriber)
pg_replication_slots.slot_type should be 'logical'
pg_replication_slots.active should be 't'
*/
SELECT * FROM pg_replication_slots;


/*
pg_stat_replication.application_name should be the name of the subscription (as it appears on the subscriber)
pg_stat_replication.state should be 'streaming'
*/
SELECT * FROM pg_stat_replication;


/* This query should return nothing unexpected: */
SELECT schemaname, relname FROM pg_stat_user_tables
WHERE (schemaname, relname) NOT IN
  (SELECT schemaname, tablename FROM pg_publication_tables);


On the subscriber:

/* "describe subscription" - is this information correct? */
\dRs+

/* When replication is caught up, pg_subscription_rel.srsubstate will be 'r' for all tables */
SELECT srrelid::regclass, srsubstate, srsublsn
FROM pg_subscription_rel ORDER BY srsubstate;
/*
srsubstate key:
i = initialize
d = data is being copied
f = finished table copy
s = synchronized
r = ready (normal replication)
*/


Spot-check data to verify publisher and subscriber match.
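For the spot check, something like this on both sides works (table and column names here are placeholders; exact matches are only expected once replication is caught up and writes are quiesced):

```sql
/* Compare the output of this query on publisher and subscriber: */
SELECT count(*),
       max(id),
       md5(string_agg(id::text, ',' ORDER BY id))
  FROM my_table;
```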


If you switch replication direction, run the above checks again.


[1] – You may want to set `default_transaction_read_only = on` on the subscriber. Yes, logical replication will still work. If you’re on AWS, make the change in your parameter group, as you can’t use `ALTER SYSTEM`.
[2] – Fix your data by either starting over from the initial copy step, or refilling just the affected tables (briefly: drop the table from the publication, refresh the subscription, truncate the table on the subscriber, add the table back to the publication, refresh the subscription again [3]). Depending on the extent of the damage, it can be better to just start over.
[3] – It’s not a bad idea to rehearse this particular scenario in staging.
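The single-table refill from footnote 2 looks roughly like this (publication, subscription, and table names are placeholders):

```sql
/* on the publisher: */
ALTER PUBLICATION my_pub DROP TABLE damaged_table;

/* on the subscriber: */
ALTER SUBSCRIPTION my_sub REFRESH PUBLICATION;
TRUNCATE damaged_table;

/* on the publisher: */
ALTER PUBLICATION my_pub ADD TABLE damaged_table;

/* on the subscriber; the refresh re-copies just this table: */
ALTER SUBSCRIPTION my_sub REFRESH PUBLICATION;
```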

6 January, 2023

PGSQL Phriday #004: Sequence Survey

by gorthx

Hello! This month’s PGSQL Phriday prompts included a question about scripts. I have a vault of SQL queries, if that counts as scripts…they are all queries I’ve worked up when I had to find a specific piece of information, and the examples on StackOverflow weren’t quite right. I love to share them, and see what other folks come up with – here’s one I’ve been using a lot over the past year.

For background, maybe you’ve seen this error message before:

gabrielle=# INSERT INTO my_stuff (stuff) values ('hello');
ERROR:  nextval: reached maximum value of sequence "my_stuff_id_seq" (2147483647)

Yikes! You need a bigint sequence, now.

There are ways to fix this when it happens; Robert Treat’s mildly-named “Advanced Int->Bigint Conversions” talk from PGConf NYC 2021 (video here) has some options.

I personally don’t like that much excitement in my day job. I prefer to catch these situations long before they happen, so I can correct them at my leisure.

Here’s a quick and easy check for “integer sequence that is roughly halfway to overflow”, courtesy Xof:

SELECT * FROM pg_sequences WHERE data_type = 'integer'::regtype AND last_value > (1<<30);

Halfway is usually enough time to implement a fix.

But it misses a specific situation that I’ve seen a fair bit of lately: bigint sequences on integer columns. Robert mentioned in his talk that Django had a bug that caused this, which gave me the idea for this next check. Over the past year, I’ve seen it with at least one other ORM and a manually-managed schema.

For reference, this is the error you get when you’re using a bigint sequence and overflow your integer column:

gabrielle=# INSERT INTO my_stuff (stuff) values ('and goodbye');
ERROR: integer out of range

(If you landed here by searching for this error, note that a bigint sequence isn’t the only cause of this error – it could just be your application trying to insert a stupidly large value into the column. That’s usually simpler to fix.)

Here’s a query that collects both the column datatype and the sequence datatype for all of your sequences, plus the last value used:

SELECT ts.nspname                  AS table_schema
       , tbl.relname               AS table_name
       , c.attname                 AS column_name
       , pg_type.typname::regtype  AS column_datatype
       , sequences.schemaname      AS sequence_schema
       , s.relname                 AS sequence_name
       , sequences.data_type       AS sequence_datatype
       , sequences.last_value
  FROM pg_class s
  JOIN pg_namespace ns  ON ns.oid = s.relnamespace
  JOIN pg_depend d      ON d.refobjid   = s.oid      AND d.refclassid='pg_class'::regclass
  JOIN pg_attrdef ad    ON ad.oid       = d.objid    AND d.classid = 'pg_attrdef'::regclass
  JOIN pg_attribute c   ON c.attrelid = ad.adrelid AND c.attnum = ad.adnum
  JOIN pg_type          ON pg_type.oid = c.atttypid
  JOIN pg_class tbl     ON tbl.oid = ad.adrelid
  JOIN pg_namespace ts  ON ts.oid = tbl.relnamespace
  JOIN pg_sequences sequences ON sequences.sequencename = s.relname AND sequences.schemaname = ns.nspname
 WHERE s.relkind = 'S'
  AND d.deptype IN ('a', 'n');

I run both of these about once a week and track the rate of change on all integer sequences, and on integer columns that have bigint sequences. Is it perfect? No, but it reduces the likelihood of an outage from this problem. There’s good info in Robert’s talk about other places you can get tripped up, and the fixes would be excellent fire drill type exercises if your team is into that sort of thing. (If you’re getting the feeling you should just go watch that talk… you’re right.)

Have a great weekend!

Addendum: Before you spring into action, check how fast the sequence is incrementing by collecting a few data points, then determine how much time you have:

(2,100,000,000 – [current_value]) / [rate of increase] = time until doom

For example, if the current sequence value is 1,300,000,000 and your sequence is incrementing by 7,000/day:

(2,100,000,000 – 1,300,000,000) / (7,000/day) = about 300 years.

So this particular example isn’t worth the trouble of changing the sequence or column data type, as long as this rate stays the same. It’s not a bad idea to verify that periodically.
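That back-of-the-envelope math is easy to do right in psql; here it is with the example numbers above:

```sql
SELECT (2100000000 - 1300000000) / 7000       AS days_left,
       (2100000000 - 1300000000) / 7000 / 365 AS years_left;
```

(114285 days, or 313 years, with integer division – "about 300 years".)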

23 December, 2022

Quick connection string/connectivity test using DBLINK

by gorthx

Ever had that sinking feeling when you’re setting up logical replication and CREATE SUBSCRIPTION... just kinda sat there? Maybe it eventually timed out?

I used to use foreign data wrappers to troubleshoot database-to-database connectivity problems, but using DBLINK is much easier. (Bonus: can be used to troubleshoot connectivity problems with foreign data wrappers, also!) Hat tip to Xof for sharing it.

-- Create the DBLINK extension in both databases
CREATE EXTENSION dblink;

-- From either side:
SELECT * FROM dblink (
'',
'SELECT 1'
) AS test(i int);

Example successful output:

gabrielle=# SELECT * FROM dblink (
    'port=5433 dbname=gabrielle',
    'SELECT 1'
  ) AS test(i int);

 i 
---
 1
(1 row)

11 December, 2022

Bike Camera Review: Cycliq Fly12CE and Fly6CE Gen2

by gorthx

This is a review of the Cycliq Fly12CE and Fly6CE Gen 2 bike cameras, which I purchased in March 2020. (Note that this version of the Fly6 is no longer available on their website.)

tl;dr: These are decent cameras with long battery life and easy on/off mounting, but the bottom line is they are not up to the rigors of year-round biking in the US Pacific Northwest. The Fly6 (rear camera) didn’t quite last 18 months. The Fly12 (front camera) lasted 21. These cameras have a one-year warranty, which IMO is pretty short for something this expensive. I may be having a “Get Off My Lawn” moment.

The details:

My main reason for purchasing bike cameras is driver accountability. My top requirements:

  • Image stability and detail; gotta be able to read license plates!
  • Long battery life; my regular commute is about 30-45 minutes each way, and I’m out for 4-6 hours for recreational rides.
  • Easy on/off the bike when I have to park it somewhere.
  • Low maintenance.

What was good:
These cams are purpose-built for biking and it shows. They have integrated lights with multiple flash/still modes, a very simple click-in mount system, and stellar battery life. The Fly12 picked up my normal Outside Voice very well, and caught both sides of some important conversations. (The Fly6 only picked up road noise and my rear derailleur shifting, as expected.) Videos are stored as .mp4s and automatically loop and rewrite over old data. The support people aren’t jerks.

What needed work:
Just about everything else.

The video quality was ok; I could read some license plates most of the time. Image stabilization was decent on pavement and minor gravel; it was impossible to get a usable video on aggregate surfaces (e.g. our “Extreme Camera Testbed” of southbound 29th between SE Salmon and Division.) I would not recommend this for recording your mtb rides.

Both cameras are USB chargeable. The Fly6 (rear) has one of those little rubber flaps protecting the port; it broke off after the third ride. This is simply not acceptable. I have a PDW taillight with a nearly identical setup that’s still attached to the light 5 years later. (May be time for a collab?) The Fly12’s port is a bit more protected from the elements by a door.

Interacting with the cams was challenging. Neither would reliably pair (Bluetooth) with any of my (mid-range Android) phones; I was successful maybe 30% of the time. This meant I couldn’t consistently control the cams via the Cycliq Android app.

The desktop app is only available for Windows and Mac. I tried both, but still had frequent problems with the app recognizing the cams.

Viewing the videos always worked when just mounting the cam as an external drive on Windows, Mac, or Linux.

The timestamps on the videos were never right – the cam doesn’t keep time when it’s turned off. Apparently that is a regular feature/bug of action cameras. You have to sync or reset before each use and I do not have time for that, especially with the Bluetooth flakiness.

The Fly12 was much heavier than I’d want to wear on my head. For me, this camera is handlebar-mount only.

The Fly6 (the rear cam) has minimal installation options; I have a Thudbuster and thus couldn’t install it on my seatpost. Plus I have a trunk bag, and that would make for a boring video. So I rigged it up to go on one of the stays on my rack (a Salsa Wanderlust from 2014, which has beefy stays – this may not work on other racks).

From the beginning, recording on the Fly6 was problematic. I’d get back from a ride and discover that it had just stopped recording partway through, even though it had plenty of battery life. I had to factory reset it about every 2-3 weeks at first, then discovered that it would also randomly start working without any intervention, so I’d just cross my fingers and hope it was recording. It gave up the ghost about 6 months out of warranty.

About 9 months out of warranty, the Fly12 started randomly missing video segments, not naming them in order, etc. A month later, it would only take a series of stills every few minutes, with no sound. This rendered the camera unusable for my purposes, as it only takes a few seconds for, hypothetically speaking, some asscricket in an Audi to blast past me on the left, driving into oncoming traffic while I’m clearly signalling a left turn, and it’s unlikely the cam would catch that in its random captures. Multiple attempts to fix this on my own (factory reset, reformatting the SD card, trying a new SD card, etc) bore no fruit, so I contacted support.

Support was nice enough; they pointed out first thing that it was out of warranty but they’d try to help anyway, and in the end offered me a 20% off coupon if I wanted to purchase a replacement camera, but ultimately were unable to fix the issue.

I did end up purchasing Cycliq replacements, despite my not-great experience with these, mainly because I wasn’t able to find anything with equivalent battery life. I’ve been using the Fly12 Sport and the Fly6CE Gen3 for about a month and a half now; they’re better so far but I’m reserving judgment for a few more months.

13 August, 2022

How to fix ‘data type [x] has no default operator class for access method [y]’ error messages

by gorthx

I recently needed to create a GiST index on an integer array column for performance comparison purposes.

I expected it to be easy:

gabrielle=# \d my_table
                             Table "public.my_table"
 Column |   Type    | Collation | Nullable |               Default
--------+-----------+-----------+----------+--------------------------------------
 id     | bigint    |           | not null | nextval('my_table_id_seq'::regclass)
 my_ids | integer[] |           |          |
Indexes:
    "my_table_pkey" PRIMARY KEY, btree (id)

gabrielle=# CREATE INDEX CONCURRENTLY idx_try_gist ON my_table USING gist (my_ids);
ERROR:  data type integer[] has no default operator class for access method "gist"
HINT:  You must specify an operator class for the index or define a default operator class for the data type.



If you’re here, you’re probably wondering “What in Sam Hill is an ‘operator class’?”

An operator manipulates data in some way. You’re likely already familiar with arithmetic operators such as ‘+’ and ‘-’, and comparison operators like ‘>’ and ‘<’. If you’ve done any programming, you know about the logical operators AND and OR. Many other operators are available in Postgres, such as the <@ and @> array containment operators (aka the “ice cream cones”).

Comparison operators are grouped into classes containing the operators needed to build an index on a column of a specific datatype. In my case, I need to find the appropriate operator class for building a gist index on an array of integers. (More at the docs here: https://www.postgresql.org/docs/current/indexes-opclass.html)

Finding the correct operator class took just a little bit of time; here’s how I did it:

First, go to the Postgres docs and find the handy query that shows which operator classes are already available in your database: https://www.postgresql.org/docs/current/indexes-opclass.html

(I limited my example to show only GiST operator classes, because that’s my particular problem here):

SELECT am.amname AS index_method,
       opc.opcname AS opclass_name,
       opc.opcintype::regtype AS indexed_type,
       opc.opcdefault AS is_default
    FROM pg_am am, pg_opclass opc
    WHERE opc.opcmethod = am.oid
    AND am.amname = 'gist'
    ORDER BY index_method, opclass_name;

 index_method |     opclass_name     |        indexed_type         | is_default
--------------+----------------------+-----------------------------+------------
 ...
 gist         | gist_int2_ops        | smallint                    | t
 gist         | gist_int4_ops        | integer                     | t
 gist         | gist_int8_ops        | bigint                      | t
 ...

gist_int4_ops looks promising, but alas:

gabrielle=# CREATE INDEX CONCURRENTLY idx_try_gist ON my_table USING gist (my_ids gist_int4_ops);
ERROR:  operator class "gist_int4_ops" does not accept data type integer[]

So I don’t have the correct operator class installed. Since I’m on Postgres, “there’s an extension for that”, so let’s go hunting!

gabrielle=# SELECT * FROM pg_available_extensions WHERE comment ILIKE '%gist%';
    name    | default_version | installed_version |                    comment
------------+-----------------+-------------------+-----------------------------------------------
 btree_gist | 1.5             | 1.5               | support for indexing common datatypes in GiST

I already know btree_gist doesn’t have what I want, so let’s try another tack:

gabrielle=# SELECT * FROM pg_available_extensions WHERE comment ILIKE '%int%';
         name         | default_version | installed_version |                                                       comment
----------------------+-----------------+-------------------+---------------------------------------------------------------------------------------------------------------------
 refint               | 1.0             | [null]            | functions for implementing referential integrity (obsolete)
 address_standardizer | 3.0.5           | [null]            | Used to parse an address into constituent elements. Generally used to support geocoding address normalization step.
 seg                  | 1.3             | [null]            | data type for representing line segments or floating-point intervals
 intagg               | 1.1             | [null]            | integer aggregator and enumerator (obsolete)
 isn                  | 1.2             | [null]            | data types for international product numbering standards
 lo                   | 1.1             | [null]            | Large Object maintenance
 intarray             | 1.2             | [null]            | functions, operators, and index support for 1-D arrays of integers
 dict_int             | 1.0             | [null]            | text search dictionary template for integers
 amcheck              | 1.2             | 1.2               | functions for verifying relation integrity

Aha, I bet intarray has what I need! Create that extension:

gabrielle=# CREATE EXTENSION intarray;
CREATE EXTENSION

… and see which operators I have now:

SELECT am.amname AS index_method,
       opc.opcname AS opclass_name,
       opc.opcintype::regtype AS indexed_type,
       opc.opcdefault AS is_default
    FROM pg_am am, pg_opclass opc
    WHERE opc.opcmethod = am.oid
    AND am.amname = 'gist'
    ORDER BY index_method, opclass_name;

 index_method |     opclass_name     |        indexed_type         | is_default
--------------+----------------------+-----------------------------+------------
...
 gist         | gist__int_ops        | integer[]                   | t
 gist         | gist__intbig_ops     | integer[]                   | f
...

gist__int_ops must be it!

gabrielle=# CREATE INDEX CONCURRENTLY idx_try_gist ON my_table USING gist (my_ids gist__int_ops);
CREATE INDEX

Voila!*


* In practice, I got this error message:

gabrielle=# CREATE INDEX CONCURRENTLY idx_try_gist ON my_table USING gist (my_ids gist__int_ops);
ERROR:  data is too sparse, recreate index using gist__intbig_ops opclass instead
Time: 2121.985 ms (00:02.122)

“int” vs “intbig” here refers to the size and cardinality of the dataset, not the integer vs bigint datatypes.

This was easily fixed by following the instructions in the error message:

gabrielle=# DROP INDEX CONCURRENTLY idx_try_gist;
gabrielle=# CREATE INDEX CONCURRENTLY idx_try_gist ON my_table USING gist (my_ids gist__intbig_ops);
8 February, 2022

Install Postgres v14 + -contrib on AWS Linux 2

by gorthx

Recently, I needed to install Postgres version 14 along with its ‘contrib’ package on an Amazon Linux 2 host.

This ‘contrib’ package requires python3.6, but 3.7 is installed by default. You’d think it would be easy to remove the pre-installed version and install the correct one, but it wasn’t obvious (to me anyway) and I ended up in that whirlpool of install/uninstall.

The key was to enable the python-3 “topic” from the ‘amazon-linux-extras’ package, and explicitly disable the ‘amzn2-core’ repo. Step-by-step instructions follow.

For reference and search engine purposes, here’s the error I got:

# sudo yum install postgresql14 postgresql14-contrib

Error: Package: postgresql14-contrib-14.1-1PGDG.rhel7.x86_64 (pgdg14)
           Requires: libpython3.6m.so.1.0()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

And the installed python version:

# python3 -V
Python 3.7.10

The instructions:

  1. Fire up your instance.

  2. Install the ‘amazon-linux-extras’ repo if it isn’t already:

    # which amazon-linux-extras

    if it's not installed:
    # sudo yum install amazon-linux-extras

  3. Install the ‘epel’ package from the ‘amazon-linux-extras’ repo:

    sudo amazon-linux-extras install epel

  4. Get the Postgres GPG key and the URL for your desired repo.
    Visit https://download.postgresql.org/pub/repos/yum/

    a. Find the appropriate key
    e.g. “RPM-GPG-KEY-PGDG-14”
    Copy the link; it goes in the ‘gpgkey’ setting in your repo (next step).

    b. Select your version, then architecture (you want RedHat here), then appropriate OS version
    e.g. 14 -> redhat -> rhel-7-x86_64
    Copy that link as well; it goes in the ‘baseurl’ setting in your repo.

  5. Configure the ‘pgdg’ repo (you’ll need to be root to do this). Use your preferred method to add the following lines to ‘/etc/yum.repos.d/pgdg.repo’ (adjust if you are installing a different version):

    [pgdg14]
    name=PostgreSQL 14 for RHEL/CentOS7 - x86_64
    gpgkey=https://download.postgresql.org/pub/repos/yum/RPM-GPG-KEY-PGDG-14
    baseurl=https://download.postgresql.org/pub/repos/yum/14/redhat/rhel-7-x86_64/
    enabled=1
    gpgcheck=1


  6. Verify the Postgres v14 package is available. List packages available in the ‘pgdg14’ repo you just configured:

    yum --disablerepo="*" --enablerepo="pgdg14" list available

    You should see some friends here:

    postgresql14.x86_64
    postgresql14-contrib.x86_64
    postgresql14-devel.x86_64
    postgresql14-docs.x86_64


  7. Remove the existing python3:

    sudo yum remove python3

  8. Install python3.6 from the ‘amazon-linux-extras’ repo:

    sudo amazon-linux-extras enable python3
    sudo yum clean metadata
    sudo yum install python3-3.6.* --disablerepo=amzn2-core


    Disabling the ‘amzn2-core’ repo was the trick here!
    If you see the following message but still can’t install ‘-contrib’, that’s a sign that you missed the ‘--disablerepo=amzn2-core’ option:

    Package matching python3-3.6.2-3.amzn2.0.3.x86_64 already installed. Checking for update.
    Nothing to do


  9. Install Postgres, with -contrib:

    sudo yum install postgresql14 postgresql14-contrib

  10. ‘initdb’, configure (not shown), start.

    cd /usr/pgsql-14/bin
    sudo ./postgresql-14-setup initdb
    sudo systemctl start postgresql-14


  11. You are now free to go about your day. I’ll leave you with this quote from ServerFault:

    “I’m not sure if this is helpful, but I’ve largely given up on Amazon Linux as it’s too difficult to get some things working.”1

Thanks to Jeff Frost for helping me find my way.

1 – https://serverfault.com/questions/1081509/error-getting-the-correct-python3-dependency

10 October, 2016

PgOpen 2016 recap

by gorthx

It’s been a few weeks now since we wrapped up the most recent PostgresOpen in Dallas.

Marianne Bellotti’s keynote was a fantastic and very interesting look into the current state of government IT.  I hope we get to see her at more conferences!

Other quick highlights:
– 2ndQ has a running beta of WordPress on Postgres! (Simon’s keynote)
– Early decisions by the Postgres team (e.g. to spend time working on subselects) directly resulted in my choosing Postgres over MySQL a few years later. (From Greg Stark’s “A Look at the Elephant’s Tail”)
– Keith Fiske (“Managing OS Provided PostgreSQL Packages”) gave a rundown of the vagaries of using Pg packages on different OSes, something that has occasionally driven me around the bend.

I got the most bang for my buck from the combination of Magnus Hagander’s “A Look at the Elephant’s Trunk” and Denish Patel’s “Advanced Postgres Monitoring” (slide deck here). Denish has a thorough list of metrics you should collect, and there are some changes coming in 9.6 to pg_stat_activity that may break current monitoring scripts that use it.

(Also: I am VERY EXCITED about pg_stat_progress_vacuum!)

(And I learned that NumPy is pronounced num-pie and not “rhymes with lumpy”.)

Bonus speaker tip: Many years ago I was in a talk in which the presenter asked “Does everybody understand [x]?” When no one responded, he continued “Just in case we have some people here who are too shy to raise their hands, [x] is…” and gave a brief explanation, enough for newer folks to follow along for the rest of the session. This is an excellent inclusion tactic. Scott Meade demonstrated another way to handle this situation: “So we’re at level set, [x] refers to …”

Thanks to our attendees, speakers, and sponsors for making this conference what it is. The committee is meeting this week to make plans for next year.  Watch the conference website and twitter feed for updates.

31 August, 2016

My Picks for PgOpen 2016

by gorthx

We’re back in Dallas for our 6th year of PostgresOpen! This year, we’re out at the Westin Galleria, which I’m told has an ice rink. (Is it too late for someone to bring the Slonik costume?)

In addition to choosing the talks for the program, part of my conference committee duties involves being a room host. I won’t get to see everything I want, but here are five sessions at the top of my “wish list”:

  • Denish Patel’s Advanced Postgres monitoring – always looking for ways to do this better
  • *Of course* I will be in Grant McAlister’s RDS talk
  • Shaun Thomas clearly and simply explains even advanced topics, and I usually come out of the room thinking “BOY am I glad I don’t have to deal with THAT situation”. This year is Elephant Herd as a Service
  • Sounds like there are some “interesting” stories behind Ilya Kosmodemiansky’s PostgreSQL Worst Practices
  • And we are really excited to have Alexander Korotkov and Oleg Bartunov join our conference to talk about RUM indexes!

A couple of other sessions I recommend:

Check out Dimitri Fontaine’s You’d Better Have Tested Backups; it’s pretty hair-raising.

Want to contribute back to Postgres, but don’t know where to start? Review some patches! David Steele will give you some pointers.

We will be having Lightning Talks, as usual. If you want to get the jump on everyone else, you can sign up here.

What’s new this year: instead of an auction, we’re having a golf tournament.  I’ve never golfed before and am looking forward to giving it a try. Come join us, it’ll be a hoot. (There is a dress code so pack accordingly if you think you might want to play.)

See you in Dallas in a few weeks! Stop by the reg desk and say hello.