PostgreSQL by Zabbix agent

Overview

This template is designed for the deployment of PostgreSQL monitoring by Zabbix via Zabbix agent and uses user parameters to run SQL queries with the psql command-line tool.

Requirements

Zabbix version: 7.0 and higher.

Tested versions

This template has been tested on:

  • PostgreSQL 10-15

Configuration

Zabbix should be configured according to the instructions in the Templates out of the box section.

Setup

Note:

  • The template requires the pg_isready and psql utilities to be installed on the same host as the Zabbix agent.
  1. Deploy Zabbix agent and create the PostgreSQL user for monitoring (replace <PASSWORD> in the statements below with a password of your choice) with proper access rights to your PostgreSQL instance.

For PostgreSQL version 10 and above:

CREATE USER zbx_monitor WITH PASSWORD '<PASSWORD>' INHERIT;
GRANT pg_monitor TO zbx_monitor;

For PostgreSQL version 9.6 and below:

CREATE USER zbx_monitor WITH PASSWORD '<PASSWORD>';
GRANT SELECT ON pg_stat_database TO zbx_monitor;

-- To collect WAL metrics, the user must have a `superuser` role.
ALTER USER zbx_monitor WITH SUPERUSER;
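
At this point you can verify that the monitoring role works. A minimal check, assuming the instance is local and listens on the default port 5432:

# Make sure the client utilities required by the template are available
command -v pg_isready psql

# Check that the server accepts connections
pg_isready -h 127.0.0.1 -p 5432

# Check that zbx_monitor can log in and run a query
psql -qtAX postgresql://zbx_monitor:<PASSWORD>@127.0.0.1:5432/postgres -c "SELECT version();"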
  2. Copy the postgresql/ directory to the zabbix user home directory - /var/lib/zabbix/. The postgresql/ directory contains the files with the SQL queries needed to obtain metrics from the PostgreSQL instance.

If the home directory of the zabbix user doesn't exist, create it first:

mkdir -m u=rwx,g=rwx,o= -p /var/lib/zabbix
chown zabbix:zabbix /var/lib/zabbix
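
For example, if the postgresql/ directory from the template files sits in the current working directory, the copy could look like this:

# Copy the SQL files used by the user parameters and hand them over to the zabbix user
cp -r postgresql/ /var/lib/zabbix/
chown -R zabbix:zabbix /var/lib/zabbix/postgresql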
  3. Copy the template_db_postgresql.conf file, containing the user parameters, to the Zabbix agent configuration directory /etc/zabbix/zabbix_agentd.d/ and restart the Zabbix agent service.
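
For example (the agent service name may differ on your distribution):

cp template_db_postgresql.conf /etc/zabbix/zabbix_agentd.d/
systemctl restart zabbix-agent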

Note: if you want to use SSL/TLS encryption to protect communications with the remote PostgreSQL instance, you can modify the connection string in the user parameters. For example, to require transport encryption without identity checks, append ?sslmode=require to the end of the connection string for all keys that use psql:

UserParameter=pgsql.bgwriter[*], psql -qtAX postgresql://"$3":"$4"@"$1":"$2"/"$5"?sslmode=require -f "/var/lib/zabbix/postgresql/pgsql.bgwriter.sql"

Consult the PostgreSQL documentation about protection modes and client connection parameters.

It is also assumed that the PostgreSQL instance itself has been set up to operate in the desired encryption mode. Check the PostgreSQL documentation for details.
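
A quick way to confirm that the server side is prepared for encrypted connections (this only checks the server setting, not the client connection string):

psql -h 127.0.0.1 -U zbx_monitor -d postgres -c "SHOW ssl;"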

  4. Edit the pg_hba.conf configuration file to allow connections for the user zbx_monitor. For example, you could add one of the following rows to allow local TCP connections from the same host:
# TYPE  DATABASE        USER            ADDRESS                 METHOD
  host  all             zbx_monitor     localhost               trust
  host  all             zbx_monitor     127.0.0.1/32            md5
  host  all             zbx_monitor     ::1/128                 scram-sha-256

For more information, see the PostgreSQL documentation: https://www.postgresql.org/docs/current/auth-pg-hba-conf.html.
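
Keep in mind that changes to pg_hba.conf only take effect after the server configuration is reloaded; the exact service name depends on your distribution. For example:

# Via the service manager...
systemctl reload postgresql

# ...or from an existing superuser session
psql -U postgres -c "SELECT pg_reload_conf();"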

  5. Specify the host name or IP address in the {$PG.HOST} macro. Adjust the port number with the {$PG.PORT} macro if needed.

  6. Set the password that you specified in step 1 in the macro {$PG.PASSWORD}.
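
Once the agent is restarted and the macros are set, you can test the user parameters end to end from the Zabbix server or proxy with zabbix_get (the agent address 192.0.2.10 below is a placeholder, substitute your own values):

zabbix_get -s 192.0.2.10 -k 'pgsql.ping["127.0.0.1","5432"]'
zabbix_get -s 192.0.2.10 -k 'pgsql.version["127.0.0.1","5432","zbx_monitor","<PASSWORD>","postgres"]'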

Macros used

| Name | Description | Default |
|------|-------------|---------|
| {$PG.CACHE_HITRATIO.MIN.WARN} | Minimum cache hit ratio percentage for trigger expression. | 90 |
| {$PG.CHECKPOINTS_REQ.MAX.WARN} | Maximum required checkpoint occurrences for trigger expression. | 5 |
| {$PG.CONFLICTS.MAX.WARN} | Maximum number of recovery conflicts for trigger expression. | 0 |
| {$PG.CONN_TOTAL_PCT.MAX.WARN} | Maximum percentage of current connections for trigger expression. | 90 |
| {$PG.DATABASE} | Default PostgreSQL database for the connection. | postgres |
| {$PG.DEADLOCKS.MAX.WARN} | Maximum number of detected deadlocks for trigger expression. | 0 |
| {$PG.FROZENXID_PCT_STOP.MIN.HIGH} | Minimum frozen XID before stop percentage for trigger expression. | 75 |
| {$PG.HOST} | Hostname or IP of PostgreSQL host. | localhost |
| {$PG.LLD.FILTER.DBNAME} | Filter of discoverable databases. | .+ |
| {$PG.LOCKS.MAX.WARN} | Maximum number of locks for trigger expression. | 100 |
| {$PG.PING_TIME.MAX.WARN} | Maximum time of connection response for trigger expression. | 1s |
| {$PG.PORT} | PostgreSQL service port. | 5432 |
| {$PG.QUERY_ETIME.MAX.WARN} | Execution time limit for the count of slow queries. | 30 |
| {$PG.REPL_LAG.MAX.WARN} | Maximum replication lag time for trigger expression. | 10m |
| {$PG.SLOW_QUERIES.MAX.WARN} | Slow queries count threshold for a trigger. | 5 |
| {$PG.USER} | PostgreSQL username. | zbx_monitor |
| {$PG.PASSWORD} | PostgreSQL user password. | <Put the password here> |

Items

Name Description Type Key and additional info
Bgwriter: Buffers allocated per second

Number of buffers allocated per second.

Dependent item pgsql.bgwriter.buffers_alloc.rate

Preprocessing

  • JSON Path: $.buffers_alloc

  • Change per second
Bgwriter: Buffers written directly by a backend per second

Number of buffers written directly by a backend per second.

Dependent item pgsql.bgwriter.buffers_backend.rate

Preprocessing

  • JSON Path: $.buffers_backend

  • Change per second
Bgwriter: Times a backend executed its own fsync per second

Number of times a backend had to execute its own fsync call per second (normally the background writer handles those even when the backend does its own write).

Dependent item pgsql.bgwriter.buffers_backend_fsync.rate

Preprocessing

  • JSON Path: $.buffers_backend_fsync

  • Change per second
Checkpoint: Buffers written during checkpoints per second

Number of buffers written during checkpoints per second.

Dependent item pgsql.bgwriter.buffers_checkpoint.rate

Preprocessing

  • JSON Path: $.buffers_checkpoint

  • Change per second
Checkpoint: Buffers written by the background writer per second

Number of buffers written by the background writer per second.

Dependent item pgsql.bgwriter.buffers_clean.rate

Preprocessing

  • JSON Path: $.buffers_clean

  • Change per second
Checkpoint: Requested per second

Number of requested checkpoints that have been performed per second.

Dependent item pgsql.bgwriter.checkpoints_req.rate

Preprocessing

  • JSON Path: $.checkpoints_req

  • Change per second
Checkpoint: Scheduled per second

Number of scheduled checkpoints that have been performed per second.

Dependent item pgsql.bgwriter.checkpoints_timed.rate

Preprocessing

  • JSON Path: $.checkpoints_timed

  • Change per second
Checkpoint: Checkpoint sync time per second

Total amount of time per second that has been spent in the portion of checkpoint processing where files are synchronized to disk.

Dependent item pgsql.bgwriter.checkpoint_sync_time.rate

Preprocessing

  • JSON Path: $.checkpoint_sync_time

  • Custom multiplier: 0.001

  • Change per second
Checkpoint: Checkpoint write time per second

Total amount of time per second that has been spent in the portion of checkpoint processing where files are written to disk.

Dependent item pgsql.bgwriter.checkpoint_write_time.rate

Preprocessing

  • JSON Path: $.checkpoint_write_time

  • Custom multiplier: 0.001

  • Change per second
Bgwriter: Number of bgwriter cleaning scan stopped per second

Number of times the background writer stopped a cleaning scan because it had written too many buffers per second.

Dependent item pgsql.bgwriter.maxwritten_clean.rate

Preprocessing

  • JSON Path: $.maxwritten_clean

  • Change per second
PostgreSQL: Get bgwriter

Collect all metrics from pg_stat_bgwriter:

https://www.postgresql.org/docs/current/monitoring-stats.html#PG-STAT-BGWRITER-VIEW

Zabbix agent pgsql.bgwriter["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
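
The dependent Bgwriter and Checkpoint items above are populated from the JSON returned by this master item via the listed JSON Paths. For illustration only, a query of roughly this shape produces such a document (the actual query shipped in /var/lib/zabbix/postgresql/pgsql.bgwriter.sql may differ):

psql -qtAX postgresql://zbx_monitor:<PASSWORD>@127.0.0.1:5432/postgres \
  -c "SELECT row_to_json(T) FROM (SELECT buffers_alloc, buffers_checkpoint, checkpoints_req FROM pg_stat_bgwriter) T;"
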
PostgreSQL: Cache hit ratio, %

Cache hit ratio.

Zabbix agent pgsql.cache.hit["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
PostgreSQL: Config hash

PostgreSQL configuration hash.

Zabbix agent pgsql.config.hash["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]

Preprocessing

  • Discard unchanged with heartbeat: 1h

Connections sum: Active

Total number of connections executing a query.

Dependent item pgsql.connections.sum.active

Preprocessing

  • JSON Path: $.active

Connections sum: Idle

Total number of connections waiting for a new client command.

Dependent item pgsql.connections.sum.idle

Preprocessing

  • JSON Path: $.idle

Connections sum: Idle in transaction

Total number of connections in a transaction state but not executing a query.

Dependent item pgsql.connections.sum.idle_in_transaction

Preprocessing

  • JSON Path: $.idle_in_transaction

Connections sum: Prepared

Total number of prepared transactions:

https://www.postgresql.org/docs/current/sql-prepare-transaction.html

Dependent item pgsql.connections.sum.prepared

Preprocessing

  • JSON Path: $.prepared

Connections sum: Total

Total number of connections.

Dependent item pgsql.connections.sum.total

Preprocessing

  • JSON Path: $.total

Connections sum: Total, %

Total number of connections, in percentage.

Dependent item pgsql.connections.sum.total_pct

Preprocessing

  • JSON Path: $.total_pct

Connections sum: Waiting

Total number of waiting connections:

https://www.postgresql.org/docs/current/monitoring-stats.html#WAIT-EVENT-TABLE

Dependent item pgsql.connections.sum.waiting

Preprocessing

  • JSON Path: $.waiting

PostgreSQL: Get connections sum

Collect all metrics from pg_stat_activity:

https://www.postgresql.org/docs/current/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW

Zabbix agent pgsql.connections.sum["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
PostgreSQL: Get dbstat

Collect all metrics from pg_stat_database per database:

https://www.postgresql.org/docs/current/monitoring-stats.html#PG-STAT-DATABASE-VIEW

Zabbix agent pgsql.dbstat["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
PostgreSQL: Get locks

Collect all metrics from pg_locks per database:

https://www.postgresql.org/docs/current/explicit-locking.html#LOCKING-TABLES

Zabbix agent pgsql.locks["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
PostgreSQL: Ping time

Used to get the SELECT 1 query execution time.

Zabbix agent pgsql.ping.time["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]

Preprocessing

  • Regular expression: Time:\s+(\d+\.\d+)\s+ms \1

  • Custom multiplier: 0.001

PostgreSQL: Ping

Used to test whether a connection is alive. It is set to 0 if the instance does not accept connections.

Zabbix agent pgsql.ping["{$PG.HOST}","{$PG.PORT}"]

Preprocessing

  • JavaScript: return value.search(/accepting connections/)>0 ? 1 : 0

  • Discard unchanged with heartbeat: 1h
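
The JavaScript step above looks for the "accepting connections" text that pg_isready prints for a reachable server, for example:

$ pg_isready -h 127.0.0.1 -p 5432
127.0.0.1:5432 - accepting connections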

PostgreSQL: Get queries

Collect all metrics by query execution time.

Zabbix agent pgsql.queries["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}","{$PG.QUERY_ETIME.MAX.WARN}"]
PostgreSQL: Replication: Standby count

Number of standby servers.

Zabbix agent pgsql.replication.count["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
PostgreSQL: Replication: Lag in seconds

Replication lag with master, in seconds.

Zabbix agent pgsql.replication.lag.sec["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
PostgreSQL: Replication: Recovery role

Replication role: 1 — recovery is still in progress (standby mode), 0 — master mode.

Zabbix agent pgsql.replication.recovery_role["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
PostgreSQL: Replication: Status

Replication status: 0 — streaming is down, 1 — streaming is up, 2 — master mode.

Zabbix agent pgsql.replication.status["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
Transactions: Max active transaction time

Current max active transaction time.

Dependent item pgsql.transactions.active

Preprocessing

  • JSON Path: $.active

Transactions: Max idle transaction time

Current max idle transaction time.

Dependent item pgsql.transactions.idle

Preprocessing

  • JSON Path: $.idle

Transactions: Max prepared transaction time

Current max prepared transaction time.

Dependent item pgsql.transactions.prepared

Preprocessing

  • JSON Path: $.prepared

Transactions: Max waiting transaction time

Current max waiting transaction time.

Dependent item pgsql.transactions.waiting

Preprocessing

  • JSON Path: $.waiting

PostgreSQL: Get transactions

Collect metrics by transaction execution time.

Zabbix agent pgsql.transactions["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
PostgreSQL: Uptime

Time since the server started.

Zabbix agent pgsql.uptime["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
PostgreSQL: Version

PostgreSQL version.

Zabbix agent pgsql.version["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]

Preprocessing

  • Discard unchanged with heartbeat: 1d

WAL: Segments count

Number of WAL segments.

Dependent item pgsql.wal.count

Preprocessing

  • JSON Path: $.count

PostgreSQL: Get WAL

Collect write-ahead log (WAL) metrics.

Zabbix agent pgsql.wal.stat["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
WAL: Bytes written

WAL write, in bytes.

Dependent item pgsql.wal.write

Preprocessing

  • JSON Path: $.write

  • Change per second

Triggers

Name Description Expression Severity Dependencies and additional info
PostgreSQL: Required checkpoints occur too frequently

Checkpoints are points in the sequence of transactions at which it is guaranteed that the heap and index data files have been updated with all information written before that checkpoint. At checkpoint time, all dirty data pages are flushed to disk and a special checkpoint record is written to the log file.
https://www.postgresql.org/docs/current/wal-configuration.html

last(/PostgreSQL by Zabbix agent/pgsql.bgwriter.checkpoints_req.rate) > {$PG.CHECKPOINTS_REQ.MAX.WARN} Average
PostgreSQL: Failed to get items

Zabbix has not received any data for items for the last 30 minutes.

nodata(/PostgreSQL by Zabbix agent/pgsql.bgwriter["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"],30m) = 1 Warning Depends on:
  • PostgreSQL: Service is down
PostgreSQL: Cache hit ratio too low

Cache hit ratio is lower than {$PG.CACHE_HITRATIO.MIN.WARN} for 5m.

max(/PostgreSQL by Zabbix agent/pgsql.cache.hit["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"],5m) < {$PG.CACHE_HITRATIO.MIN.WARN} Warning
PostgreSQL: Configuration has changed

PostgreSQL configuration has changed.

last(/PostgreSQL by Zabbix agent/pgsql.config.hash["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"],#1)<>last(/PostgreSQL by Zabbix agent/pgsql.config.hash["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"],#2) and length(last(/PostgreSQL by Zabbix agent/pgsql.config.hash["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]))>0 Info
PostgreSQL: Total number of connections is too high

Total number of current connections exceeds the limit of {$PG.CONN_TOTAL_PCT.MAX.WARN}% out of the maximum number of concurrent connections to the database server (the "max_connections" setting).

min(/PostgreSQL by Zabbix agent/pgsql.connections.sum.total_pct,5m) > {$PG.CONN_TOTAL_PCT.MAX.WARN} Average
PostgreSQL: Response too long

Response is taking too long (over {$PG.PING_TIME.MAX.WARN} for 5m).

min(/PostgreSQL by Zabbix agent/pgsql.ping.time["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"],5m) > {$PG.PING_TIME.MAX.WARN} Average Depends on:
  • PostgreSQL: Service is down
PostgreSQL: Service is down

Last test of a connection was unsuccessful.

last(/PostgreSQL by Zabbix agent/pgsql.ping["{$PG.HOST}","{$PG.PORT}"]) = 0 High
PostgreSQL: Streaming lag with {#MASTER} is too high

Replication lag with master is higher than {$PG.REPL_LAG.MAX.WARN} for 5m.

min(/PostgreSQL by Zabbix agent/pgsql.replication.lag.sec["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"],5m) > {$PG.REPL_LAG.MAX.WARN} Average
PostgreSQL: Replication is down

Replication is enabled and data streaming was down for 5m.

max(/PostgreSQL by Zabbix agent/pgsql.replication.status["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"],5m)=0 Average
PostgreSQL: Service has been restarted

PostgreSQL uptime is less than 10 minutes.

last(/PostgreSQL by Zabbix agent/pgsql.uptime["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]) < 10m Average
PostgreSQL: Version has changed

last(/PostgreSQL by Zabbix agent/pgsql.version["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"],#1)<>last(/PostgreSQL by Zabbix agent/pgsql.version["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"],#2) and length(last(/PostgreSQL by Zabbix agent/pgsql.version["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]))>0 Info

LLD rule Database discovery

Name Description Type Key and additional info
Database discovery

Discovers databases (DB) in the database management system (DBMS), except:

- templates;

- default "postgres" DB;

- DBs that do not allow connections.

Zabbix agent pgsql.discovery.db["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}"]
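
For illustration, databases matching the criteria above can be listed with a query like the following (a sketch of the discovery logic, not necessarily the exact query used by the template):

psql -qtAX -h 127.0.0.1 -U zbx_monitor -d postgres \
  -c "SELECT datname FROM pg_database WHERE NOT datistemplate AND datallowconn AND datname <> 'postgres';"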

Item prototypes for Database discovery

Name Description Type Key and additional info
DB [{#DBNAME}]: Get dbstat

Get dbstat metrics for database "{#DBNAME}".

Dependent item pgsql.dbstat.get_metrics["{#DBNAME}"]

Preprocessing

  • JSON Path: $['{#DBNAME}']

    Custom on fail: Discard value

DB [{#DBNAME}]: Get queries

Get queries metrics for database "{#DBNAME}".

Dependent item pgsql.queries.get_metrics["{#DBNAME}"]

Preprocessing

  • JSON Path: $['{#DBNAME}']

    Custom on fail: Discard value

DB [{#DBNAME}]: Database size

Database size.

Zabbix agent pgsql.db.size["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}","{#DBNAME}"]
DB [{#DBNAME}]: Blocks hit per second

Total number of times per second disk blocks were found already in the buffer cache, so that a read was not necessary.

Dependent item pgsql.dbstat.blks_hit.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.blks_hit

  • Change per second
DB [{#DBNAME}]: Disk blocks read per second

Total number of disk blocks read per second in this database.

Dependent item pgsql.dbstat.blks_read.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.blks_read

  • Change per second
DB [{#DBNAME}]: Detected conflicts per second

Total number of queries canceled due to conflicts with recovery in this database per second.

Dependent item pgsql.dbstat.conflicts.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.conflicts

  • Change per second
DB [{#DBNAME}]: Detected deadlocks per second

Total number of detected deadlocks in this database per second.

Dependent item pgsql.dbstat.deadlocks.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.deadlocks

  • Change per second
DB [{#DBNAME}]: Temp_bytes written per second

Total amount of data written to temporary files by queries in this database per second.

Dependent item pgsql.dbstat.temp_bytes.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.temp_bytes

  • Change per second
DB [{#DBNAME}]: Temp_files created per second

Total number of temporary files created by queries in this database per second.

Dependent item pgsql.dbstat.temp_files.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.temp_files

  • Change per second
DB [{#DBNAME}]: Tuples deleted per second

Total number of rows deleted by queries in this database per second.

Dependent item pgsql.dbstat.tup_deleted.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.tup_deleted

  • Change per second
DB [{#DBNAME}]: Tuples fetched per second

Total number of rows fetched by queries in this database per second.

Dependent item pgsql.dbstat.tup_fetched.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.tup_fetched

  • Change per second
DB [{#DBNAME}]: Tuples inserted per second

Total number of rows inserted by queries in this database per second.

Dependent item pgsql.dbstat.tup_inserted.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.tup_inserted

  • Change per second
DB [{#DBNAME}]: Tuples returned per second

Number of rows returned by queries in this database per second.

Dependent item pgsql.dbstat.tup_returned.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.tup_returned

  • Change per second
DB [{#DBNAME}]: Tuples updated per second

Total number of rows updated by queries in this database per second.

Dependent item pgsql.dbstat.tup_updated.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.tup_updated

  • Change per second
DB [{#DBNAME}]: Commits per second

Number of transactions in this database that have been committed per second.

Dependent item pgsql.dbstat.xact_commit.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.xact_commit

  • Change per second
DB [{#DBNAME}]: Rollbacks per second

Total number of transactions in this database that have been rolled back per second.

Dependent item pgsql.dbstat.xact_rollback.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.xact_rollback

  • Change per second
DB [{#DBNAME}]: Frozen XID before autovacuum, %

Preventing Transaction ID Wraparound Failures:

https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND

Dependent item pgsql.frozenxid.prc_before_av["{#DBNAME}"]

Preprocessing

  • JSON Path: $.prc_before_av

DB [{#DBNAME}]: Frozen XID before stop, %

Preventing Transaction ID Wraparound Failures:

https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND

Dependent item pgsql.frozenxid.prc_before_stop["{#DBNAME}"]

Preprocessing

  • JSON Path: $.prc_before_stop

DB [{#DBNAME}]: Get frozen XID

Zabbix agent pgsql.frozenxid["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{#DBNAME}"]
DB [{#DBNAME}]: Num of locks total

Total number of locks in this database.

Dependent item pgsql.locks.total["{#DBNAME}"]

Preprocessing

  • JSON Path: $['{#DBNAME}'].total

DB [{#DBNAME}]: Queries slow maintenance count

Slow maintenance query count for this database.

Dependent item pgsql.queries.mro.slow_count["{#DBNAME}"]

Preprocessing

  • JSON Path: $.mro_slow_count

DB [{#DBNAME}]: Queries max maintenance time

Max maintenance query time for this database.

Dependent item pgsql.queries.mro.time_max["{#DBNAME}"]

Preprocessing

  • JSON Path: $.mro_time_max

DB [{#DBNAME}]: Queries sum maintenance time

Sum maintenance query time for this database.

Dependent item pgsql.queries.mro.time_sum["{#DBNAME}"]

Preprocessing

  • JSON Path: $.mro_time_sum

DB [{#DBNAME}]: Queries slow query count

Slow query count for this database.

Dependent item pgsql.queries.query.slow_count["{#DBNAME}"]

Preprocessing

  • JSON Path: $.query_slow_count

DB [{#DBNAME}]: Queries max query time

Max query time for this database.

Dependent item pgsql.queries.query.time_max["{#DBNAME}"]

Preprocessing

  • JSON Path: $.query_time_max

DB [{#DBNAME}]: Queries sum query time

Sum query time for this database.

Dependent item pgsql.queries.query.time_sum["{#DBNAME}"]

Preprocessing

  • JSON Path: $.query_time_sum

DB [{#DBNAME}]: Queries slow transaction count

Slow transaction query count for this database.

Dependent item pgsql.queries.tx.slow_count["{#DBNAME}"]

Preprocessing

  • JSON Path: $.tx_slow_count

DB [{#DBNAME}]: Queries max transaction time

Max transaction query time for this database.

Dependent item pgsql.queries.tx.time_max["{#DBNAME}"]

Preprocessing

  • JSON Path: $.tx_time_max

DB [{#DBNAME}]: Queries sum transaction time

Sum transaction query time for this database.

Dependent item pgsql.queries.tx.time_sum["{#DBNAME}"]

Preprocessing

  • JSON Path: $.tx_time_sum

DB [{#DBNAME}]: Index scans per second

Number of index scans in the database per second.

Dependent item pgsql.scans.idx.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.idx

  • Change per second
DB [{#DBNAME}]: Sequential scans per second

Number of sequential scans in this database per second.

Dependent item pgsql.scans.seq.rate["{#DBNAME}"]

Preprocessing

  • JSON Path: $.seq

  • Change per second
DB [{#DBNAME}]: Get scans

Number of scans done for table/index in this database.

Zabbix agent pgsql.scans["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.PASSWORD}","{#DBNAME}"]

Trigger prototypes for Database discovery

Name Description Expression Severity Dependencies and additional info
DB [{#DBNAME}]: Too many recovery conflicts

The primary and standby servers are in many ways loosely connected. Actions on the primary will have an effect on the standby. As a result, there is potential for negative interactions or conflicts between them.
https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-CONFLICT

min(/PostgreSQL by Zabbix agent/pgsql.dbstat.conflicts.rate["{#DBNAME}"],5m) > {$PG.CONFLICTS.MAX.WARN:"{#DBNAME}"} Average
DB [{#DBNAME}]: Deadlock occurred

Number of deadlocks detected per second exceeds {$PG.DEADLOCKS.MAX.WARN:"{#DBNAME}"} for 5m.

min(/PostgreSQL by Zabbix agent/pgsql.dbstat.deadlocks.rate["{#DBNAME}"],5m) > {$PG.DEADLOCKS.MAX.WARN:"{#DBNAME}"} High
DB [{#DBNAME}]: VACUUM FREEZE is required to prevent wraparound

Preventing Transaction ID Wraparound Failures:
https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND

last(/PostgreSQL by Zabbix agent/pgsql.frozenxid.prc_before_stop["{#DBNAME}"])<{$PG.FROZENXID_PCT_STOP.MIN.HIGH:"{#DBNAME}"} Average
DB [{#DBNAME}]: Number of locks is too high

min(/PostgreSQL by Zabbix agent/pgsql.locks.total["{#DBNAME}"],5m)>{$PG.LOCKS.MAX.WARN:"{#DBNAME}"} Warning
DB [{#DBNAME}]: Too many slow queries

The number of detected slow queries exceeds the limit of {$PG.SLOW_QUERIES.MAX.WARN:"{#DBNAME}"}.

min(/PostgreSQL by Zabbix agent/pgsql.queries.query.slow_count["{#DBNAME}"],5m)>{$PG.SLOW_QUERIES.MAX.WARN:"{#DBNAME}"} Warning

Feedback

Please report any issues with the template at https://support.zabbix.com

You can also provide feedback, discuss the template, or ask for help at the Zabbix forums.