# PostgreSQL by Zabbix agent 2

## Overview

This template is designed for the deployment of PostgreSQL monitoring by Zabbix via Zabbix agent 2 and uses a loadable plugin to run SQL queries.

## Requirements

Zabbix version: 7.0 and higher.

## Tested versions

This template has been tested on:

- PostgreSQL 10-15

## Configuration

> Zabbix should be configured according to the instructions in the [Templates out of the box](https://www.zabbix.com/documentation/7.0/manual/config/templates_out_of_the_box) section.

## Setup

1. Deploy Zabbix agent 2 with the PostgreSQL plugin. Starting with Zabbix versions 6.0.10 / 6.2.4 / 6.4, PostgreSQL metrics are moved to a loadable plugin and require installation of a separate package or [`compilation of the plugin from sources`](https://www.zabbix.com/documentation/7.0/manual/extensions/plugins/build).

2. Create the PostgreSQL user for monitoring (the name is at your discretion; the examples below use `zbx_monitor`) and inherit permissions from the default role `pg_monitor`:

```sql
CREATE USER zbx_monitor WITH PASSWORD '' INHERIT;
GRANT pg_monitor TO zbx_monitor;
```

3. Edit the `pg_hba.conf` configuration file to allow connections for the user `zbx_monitor`. For example, you could add one of the following rows to allow local TCP connections from the same host:

```bash
# TYPE  DATABASE  USER         ADDRESS       METHOD
  host  all       zbx_monitor  localhost     trust
  host  all       zbx_monitor  127.0.0.1/32  md5
  host  all       zbx_monitor  ::1/128       scram-sha-256
```

For more information, please read the PostgreSQL documentation: `https://www.postgresql.org/docs/current/auth-pg-hba-conf.html`.

4. Set the connection string for the PostgreSQL instance in the `{$PG.CONNSTRING}` macro as a URI, such as ``, or specify a named session, such as ``.

**Note:** if you want to use SSL/TLS encryption to protect communications with the remote PostgreSQL instance, a named session must be used.
In that case, the instance URI should be specified in the `Plugins.PostgreSQL.Sessions.*.Uri` parameter in the PostgreSQL plugin configuration files, alongside all the encryption parameters (type, certificate/key file paths if needed, etc.). You can check the [`PostgreSQL plugin documentation`](https://git.zabbix.com/projects/AP/repos/postgresql/browse?at=refs%2Fheads%2Frelease%2F7.0) for details about agent plugin parameters and named sessions. It is also assumed that you have set up the PostgreSQL instance to work in the desired encryption mode; check the [`PostgreSQL documentation`](https://www.postgresql.org/docs/current/ssl-tcp.html) for details.

**Note:** plugin TLS certificate validation relies on checking the Subject Alternative Names (SAN) instead of the Common Name (CN); check the cryptography package [`documentation`](https://pkg.go.dev/crypto/x509) for details.

For example, to enable required encryption in transport mode without identity checks, you could create the file `/etc/zabbix/zabbix_agent2.d/postgresql_myconn.conf` with the following configuration for the named session `myconn` (replace `` with the address of the PostgreSQL instance):

```bash
Plugins.PostgreSQL.Sessions.myconn.Uri=tcp://:5432
Plugins.PostgreSQL.Sessions.myconn.TLSConnect=required
```

Then set the `{$PG.CONNSTRING}` macro to `myconn` to use this named session.

5. Set the password that you specified in step 2 in the macro `{$PG.PASSWORD}`.

### Macros used

|Name|Description|Default|
|----|-----------|-------|
|{$PG.PASSWORD}|PostgreSQL user password.|``|
|{$PG.CONNSTRING}|URI or named session of the PostgreSQL instance.|`tcp://localhost:5432`|
|{$PG.USER}|PostgreSQL username.|`zbx_monitor`|
|{$PG.LLD.FILTER.DBNAME}|Filter of discoverable databases.|`.+`|
|{$PG.CONN_TOTAL_PCT.MAX.WARN}|Maximum percentage of current connections for trigger expression.|`90`|
|{$PG.DATABASE}|Default PostgreSQL database for the connection.|`postgres`|
|{$PG.DEADLOCKS.MAX.WARN}|Maximum number of detected deadlocks for trigger expression.|`0`|
|{$PG.LLD.FILTER.APPLICATION}|Filter of discoverable applications.|`.+`|
|{$PG.CONFLICTS.MAX.WARN}|Maximum number of recovery conflicts for trigger expression.|`0`|
|{$PG.QUERY_ETIME.MAX.WARN}|Execution time limit for count of slow queries.|`30`|
|{$PG.SLOW_QUERIES.MAX.WARN}|Slow queries count threshold for a trigger.|`5`|

### Items

|Name|Description|Type|Key and additional info|
|----|-----------|----|-----------------------|
|PostgreSQL: Get bgwriter|Collect all metrics from pg_stat_bgwriter:<br>https://www.postgresql.org/docs/current/monitoring-stats.html#PG-STAT-BGWRITER-VIEW|Zabbix agent|pgsql.bgwriter["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Get archive|Collect archive status metrics.|Zabbix agent|pgsql.archive["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Get dbstat|Collect all metrics from pg_stat_database per database:<br>https://www.postgresql.org/docs/current/monitoring-stats.html#PG-STAT-DATABASE-VIEW|Zabbix agent|pgsql.dbstat["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Get dbstat sum|Collect all metrics from pg_stat_database as sums for all databases:<br>https://www.postgresql.org/docs/current/monitoring-stats.html#PG-STAT-DATABASE-VIEW|Zabbix agent|pgsql.dbstat.sum["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Get connections sum|Collect all metrics from pg_stat_activity:<br>https://www.postgresql.org/docs/current/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW|Zabbix agent|pgsql.connections["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Get WAL|Collect write-ahead log (WAL) metrics.|Zabbix agent|pgsql.wal.stat["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Get locks|Collect all metrics from pg_locks per database:<br>https://www.postgresql.org/docs/current/explicit-locking.html#LOCKING-TABLES|Zabbix agent|pgsql.locks["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Custom queries|Execute custom queries from *.sql files (see the Plugins.Postgres.CustomQueriesPath option in the agent configuration).|Zabbix agent|pgsql.custom.query["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}",""]|
|PostgreSQL: Get replication|Collect metrics from pg_stat_replication, which contains information about the WAL sender process, showing statistics about replication to that sender's connected standby server.|Zabbix agent|pgsql.replication.process["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Get queries|Collect all metrics by query execution time.|Zabbix agent|pgsql.queries["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DATABASE}","{$PG.QUERY_ETIME.MAX.WARN}"]|
|WAL: Bytes written|WAL write, in bytes.|Dependent item|pgsql.wal.write<br>**Preprocessing**<br>• JSON Path: `$.write`<br>• Change per second|
|WAL: Bytes received|WAL receive, in bytes.|Dependent item|pgsql.wal.receive<br>**Preprocessing**<br>• JSON Path: `$.receive`<br>• Change per second|
|WAL: Segments count|Number of WAL segments.|Dependent item|pgsql.wal.count<br>**Preprocessing**<br>• JSON Path: `$.count`|
|Bgwriter: Buffers allocated per second|Number of buffers allocated per second.|Dependent item|pgsql.bgwriter.buffers_alloc.rate<br>**Preprocessing**<br>• JSON Path: `$.buffers_alloc`<br>• Change per second|
|Bgwriter: Buffers written directly by a backend per second|Number of buffers written directly by a backend per second.|Dependent item|pgsql.bgwriter.buffers_backend.rate<br>**Preprocessing**<br>• JSON Path: `$.buffers_backend`<br>• Change per second|
|Bgwriter: Number of bgwriter cleaning scan stopped per second|Number of times per second the background writer stopped a cleaning scan because it had written too many buffers.|Dependent item|pgsql.bgwriter.maxwritten_clean.rate<br>**Preprocessing**<br>• JSON Path: `$.maxwritten_clean`<br>• Change per second|
|Bgwriter: Times a backend executed its own fsync per second|Number of times per second a backend had to execute its own fsync call (normally the background writer handles those even when the backend does its own write).|Dependent item|pgsql.bgwriter.buffers_backend_fsync.rate<br>**Preprocessing**<br>• JSON Path: `$.buffers_backend_fsync`<br>• Change per second|
|Checkpoint: Buffers written by the background writer per second|Number of buffers written by the background writer per second.|Dependent item|pgsql.bgwriter.buffers_clean.rate<br>**Preprocessing**<br>• JSON Path: `$.buffers_clean`<br>• Change per second|
|Checkpoint: Buffers written during checkpoints per second|Number of buffers written during checkpoints per second.|Dependent item|pgsql.bgwriter.buffers_checkpoint.rate<br>**Preprocessing**<br>• JSON Path: `$.buffers_checkpoint`<br>• Change per second|
|Checkpoint: Scheduled per second|Number of scheduled checkpoints that have been performed per second.|Dependent item|pgsql.bgwriter.checkpoints_timed.rate<br>**Preprocessing**<br>• JSON Path: `$.checkpoints_timed`<br>• Change per second|
|Checkpoint: Requested per second|Number of requested checkpoints that have been performed per second.|Dependent item|pgsql.bgwriter.checkpoints_req.rate<br>**Preprocessing**<br>• JSON Path: `$.checkpoints_req`<br>• Change per second|
|Checkpoint: Checkpoint write time per second|Total amount of time per second that has been spent in the portion of checkpoint processing where files are written to disk.|Dependent item|pgsql.bgwriter.checkpoint_write_time.rate<br>**Preprocessing**<br>• JSON Path: `$.checkpoint_write_time`<br>• Custom multiplier: `0.001`<br>• Change per second|
|Checkpoint: Checkpoint sync time per second|Total amount of time per second that has been spent in the portion of checkpoint processing where files are synchronized to disk.|Dependent item|pgsql.bgwriter.checkpoint_sync_time.rate<br>**Preprocessing**<br>• JSON Path: `$.checkpoint_sync_time`<br>• Custom multiplier: `0.001`<br>• Change per second|
|Archive: Count of archived files|Count of archived files.|Dependent item|pgsql.archive.count_archived_files<br>**Preprocessing**<br>• JSON Path: `$.archived_count`|
|Archive: Count of failed attempts to archive files|Count of failed attempts to archive files.|Dependent item|pgsql.archive.failed_trying_to_archive<br>**Preprocessing**<br>• JSON Path: `$.failed_count`|
|Archive: Count of files in archive_status need to archive|Count of files to archive.|Dependent item|pgsql.archive.count_files_to_archive<br>**Preprocessing**<br>• JSON Path: `$.count_files`|
|Archive: Size of files need to archive|Size of files to archive.|Dependent item|pgsql.archive.size_files_to_archive<br>**Preprocessing**<br>• JSON Path: `$.size_files`|
|Dbstat: Blocks read time|Time spent reading data file blocks by backends.|Dependent item|pgsql.dbstat.sum.blk_read_time<br>**Preprocessing**<br>• JSON Path: `$.blk_read_time`<br>• Custom multiplier: `0.001`|
|Dbstat: Blocks write time|Time spent writing data file blocks by backends.|Dependent item|pgsql.dbstat.sum.blk_write_time<br>**Preprocessing**<br>• JSON Path: `$.blk_write_time`<br>• Custom multiplier: `0.001`|
|Dbstat: Checksum failures per second|Number of data page checksum failures detected per second (including failures on shared objects), or NULL if data checksums are not enabled. This metric is available since PostgreSQL 12.|Dependent item|pgsql.dbstat.sum.checksum_failures.rate<br>**Preprocessing**<br>• JSON Path: `$.checksum_failures`<br>• Matches regular expression: `^\d*$`<br>⛔️Custom on fail: Set value to: `-2`<br>• Change per second<br>⛔️Custom on fail: Set value to: `-1`|
|Dbstat: Committed transactions per second|Number of transactions that have been committed per second.|Dependent item|pgsql.dbstat.sum.xact_commit.rate<br>**Preprocessing**<br>• JSON Path: `$.xact_commit`<br>• Change per second|
|Dbstat: Conflicts per second|Number of queries canceled per second due to conflicts with recovery (conflicts occur only on standby servers; see pg_stat_database_conflicts for details).|Dependent item|pgsql.dbstat.sum.conflicts.rate<br>**Preprocessing**<br>• JSON Path: `$.conflicts`<br>• Change per second|
|Dbstat: Deadlocks per second|Number of deadlocks detected per second.|Dependent item|pgsql.dbstat.sum.deadlocks.rate<br>**Preprocessing**<br>• JSON Path: `$.deadlocks`<br>• Change per second|
|Dbstat: Disk blocks read per second|Number of disk blocks read per second.|Dependent item|pgsql.dbstat.sum.blks_read.rate<br>**Preprocessing**<br>• JSON Path: `$.blks_read`<br>• Change per second|
|Dbstat: Hit blocks read per second|Number of times per second disk blocks were found already in the buffer cache.|Dependent item|pgsql.dbstat.sum.blks_hit.rate<br>**Preprocessing**<br>• JSON Path: `$.blks_hit`<br>• Change per second|
|Dbstat: Number temp bytes per second|Total amount of data written per second to temporary files by queries.|Dependent item|pgsql.dbstat.sum.temp_bytes.rate<br>**Preprocessing**<br>• JSON Path: `$.temp_bytes`<br>• Change per second|
|Dbstat: Number temp files per second|Number of temporary files created by queries per second.|Dependent item|pgsql.dbstat.sum.temp_files.rate<br>**Preprocessing**<br>• JSON Path: `$.temp_files`<br>• Change per second|
|Dbstat: Roll backed transactions per second|Number of transactions that have been rolled back per second.|Dependent item|pgsql.dbstat.sum.xact_rollback.rate<br>**Preprocessing**<br>• JSON Path: `$.xact_rollback`<br>• Change per second|
|Dbstat: Rows deleted per second|Number of rows deleted by queries per second.|Dependent item|pgsql.dbstat.sum.tup_deleted.rate<br>**Preprocessing**<br>• JSON Path: `$.tup_deleted`<br>• Change per second|
|Dbstat: Rows fetched per second|Number of rows fetched by queries per second.|Dependent item|pgsql.dbstat.sum.tup_fetched.rate<br>**Preprocessing**<br>• JSON Path: `$.tup_fetched`<br>• Change per second|
|Dbstat: Rows inserted per second|Number of rows inserted by queries per second.|Dependent item|pgsql.dbstat.sum.tup_inserted.rate<br>**Preprocessing**<br>• JSON Path: `$.tup_inserted`<br>• Change per second|
|Dbstat: Rows returned per second|Number of rows returned by queries per second.|Dependent item|pgsql.dbstat.sum.tup_returned.rate<br>**Preprocessing**<br>• JSON Path: `$.tup_returned`<br>• Change per second|
|Dbstat: Rows updated per second|Number of rows updated by queries per second.|Dependent item|pgsql.dbstat.sum.tup_updated.rate<br>**Preprocessing**<br>• JSON Path: `$.tup_updated`<br>• Change per second|
|Dbstat: Backends connected|Number of connected backends.|Dependent item|pgsql.dbstat.sum.numbackends<br>**Preprocessing**<br>• JSON Path: `$.numbackends`|
|Connections sum: Active|Total number of connections executing a query.|Dependent item|pgsql.connections.sum.active<br>**Preprocessing**<br>• JSON Path: `$.active`|
|Connections sum: Fastpath function call|Total number of connections executing a fast-path function.|Dependent item|pgsql.connections.sum.fastpath_function_call<br>**Preprocessing**<br>• JSON Path: `$.fastpath_function_call`|
|Connections sum: Idle|Total number of connections waiting for a new client command.|Dependent item|pgsql.connections.sum.idle<br>**Preprocessing**<br>• JSON Path: `$.idle`|
|Connections sum: Idle in transaction|Total number of connections in a transaction state but not executing a query.|Dependent item|pgsql.connections.sum.idle_in_transaction<br>**Preprocessing**<br>• JSON Path: `$.idle_in_transaction`|
|Connections sum: Prepared|Total number of prepared transactions:<br>https://www.postgresql.org/docs/current/sql-prepare-transaction.html|Dependent item|pgsql.connections.sum.prepared<br>**Preprocessing**<br>• JSON Path: `$.prepared`|
|Connections sum: Total|Total number of connections.|Dependent item|pgsql.connections.sum.total<br>**Preprocessing**<br>• JSON Path: `$.total`|
|Connections sum: Total, %|Total number of connections, in percentage.|Dependent item|pgsql.connections.sum.total_pct<br>**Preprocessing**<br>• JSON Path: `$.total_pct`|
|Connections sum: Waiting|Total number of waiting connections:<br>https://www.postgresql.org/docs/current/monitoring-stats.html#WAIT-EVENT-TABLE|Dependent item|pgsql.connections.sum.waiting<br>**Preprocessing**<br>• JSON Path: `$.waiting`|
|Connections sum: Idle in transaction (aborted)|Total number of connections in a transaction state but not executing a query, where one of the statements in the transaction caused an error.|Dependent item|pgsql.connections.sum.idle_in_transaction_aborted<br>**Preprocessing**<br>• JSON Path: `$.idle_in_transaction_aborted`|
|Connections sum: Disabled|Total number of disabled connections.|Dependent item|pgsql.connections.sum.disabled<br>**Preprocessing**<br>• JSON Path: `$.disabled`|
|PostgreSQL: Age of oldest xid|Age of the oldest xid.|Zabbix agent|pgsql.oldest.xid["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Count of autovacuum workers|Number of autovacuum workers.|Zabbix agent|pgsql.autovacuum.count["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Cache hit ratio, %|Cache hit ratio.|Calculated|pgsql.cache.hit|
|PostgreSQL: Uptime|Time since the server started.|Zabbix agent|pgsql.uptime["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Replication: Lag in bytes|Replication lag with master, in bytes.|Zabbix agent|pgsql.replication.lag.b["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Replication: Lag in seconds|Replication lag with master, in seconds.|Zabbix agent|pgsql.replication.lag.sec["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Replication: Recovery role|Replication role: 1 — recovery is still in progress (standby mode); 0 — master mode.|Zabbix agent|pgsql.replication.recovery_role["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Replication: Standby count|Number of standby servers.|Zabbix agent|pgsql.replication.count["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Replication: Status|Replication status: 0 — streaming is down; 1 — streaming is up; 2 — master mode.|Zabbix agent|pgsql.replication.status["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|
|PostgreSQL: Ping|Used to test whether a connection is alive. Set to 0 if the query is unsuccessful.|Zabbix agent|pgsql.ping["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]<br>**Preprocessing**<br>• Discard unchanged with heartbeat: `1h`|
### Triggers

|Name|Description|Expression|Severity|Dependencies and additional info|
|----|-----------|----------|--------|--------------------------------|
|Dbstat: Checksum failures detected|Data page checksum failures were detected on this DB instance:<br>https://www.postgresql.org/docs/current/checksums.html|`last(/PostgreSQL by Zabbix agent 2/pgsql.dbstat.sum.checksum_failures.rate)>0`|Average||
|PostgreSQL: Total number of connections is too high|Total number of current connections exceeds the limit of {$PG.CONN_TOTAL_PCT.MAX.WARN}% out of the maximum number of concurrent connections to the database server (the "max_connections" setting).|`min(/PostgreSQL by Zabbix agent 2/pgsql.connections.sum.total_pct,5m) > {$PG.CONN_TOTAL_PCT.MAX.WARN}`|Average||
|PostgreSQL: Oldest xid is too big||`last(/PostgreSQL by Zabbix agent 2/pgsql.oldest.xid["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]) > 18000000`|Average||
|PostgreSQL: Service has been restarted|PostgreSQL uptime is less than 10 minutes.|`last(/PostgreSQL by Zabbix agent 2/pgsql.uptime["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]) < 10m`|Average||
|PostgreSQL: Service is down|The last connection test was unsuccessful.|`last(/PostgreSQL by Zabbix agent 2/pgsql.ping["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"])=0`|High||

### LLD rule Replication discovery

|Name|Description|Type|Key and additional info|
|----|-----------|----|-----------------------|
|Replication discovery|Discovers replication lag metrics.|Zabbix agent|pgsql.replication.process.discovery["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|

### Item prototypes for Replication discovery

|Name|Description|Type|Key and additional info|
|----|-----------|----|-----------------------|
|Application [{#APPLICATION_NAME}]: Get replication|Collect metrics from pg_stat_replication about the application "{#APPLICATION_NAME}" connected to this WAL sender, showing statistics about replication to that sender's connected standby server.|Dependent item|pgsql.replication.get_metrics["{#APPLICATION_NAME}"]<br>**Preprocessing**<br>• JSON Path: `$['{#APPLICATION_NAME}']`<br>⛔️Custom on fail: Discard value|
|Application [{#APPLICATION_NAME}]: Replication flush lag||Dependent item|pgsql.replication.process.flush_lag["{#APPLICATION_NAME}"]<br>**Preprocessing**<br>• JSON Path: `$.flush_lag`|
|Application [{#APPLICATION_NAME}]: Replication replay lag||Dependent item|pgsql.replication.process.replay_lag["{#APPLICATION_NAME}"]<br>**Preprocessing**<br>• JSON Path: `$.replay_lag`|
|Application [{#APPLICATION_NAME}]: Replication write lag||Dependent item|pgsql.replication.process.write_lag["{#APPLICATION_NAME}"]<br>**Preprocessing**<br>• JSON Path: `$.write_lag`|

### LLD rule Database discovery

|Name|Description|Type|Key and additional info|
|----|-----------|----|-----------------------|
|Database discovery|Discovers databases (DB) in the database management system (DBMS), except:<br>- templates;<br>- DBs that do not allow connections.|Zabbix agent|pgsql.db.discovery["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}"]|

### Item prototypes for Database discovery

|Name|Description|Type|Key and additional info|
|----|-----------|----|-----------------------|
|DB [{#DBNAME}]: Get dbstat|Get dbstat metrics for database "{#DBNAME}".|Dependent item|pgsql.dbstat.get_metrics["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$['{#DBNAME}']`<br>⛔️Custom on fail: Discard value|
|DB [{#DBNAME}]: Get locks|Get locks metrics for database "{#DBNAME}".|Dependent item|pgsql.locks.get_metrics["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$['{#DBNAME}']`<br>⛔️Custom on fail: Discard value|
|DB [{#DBNAME}]: Get queries|Get queries metrics for database "{#DBNAME}".|Dependent item|pgsql.queries.get_metrics["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$['{#DBNAME}']`<br>⛔️Custom on fail: Discard value|
|DB [{#DBNAME}]: Database age|Database age.|Zabbix agent|pgsql.db.age["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}","{#DBNAME}"]|
|DB [{#DBNAME}]: Bloating tables|Number of bloating tables.|Zabbix agent|pgsql.db.bloating_tables["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}","{#DBNAME}"]|
|DB [{#DBNAME}]: Database size|Database size.|Zabbix agent|pgsql.db.size["{$PG.CONNSTRING}","{$PG.USER}","{$PG.PASSWORD}","{#DBNAME}"]|
|DB [{#DBNAME}]: Blocks hit per second|Total number of times per second disk blocks were found already in the buffer cache, so that a read was not necessary.|Dependent item|pgsql.dbstat.blks_hit.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.blks_hit`<br>• Change per second|
|DB [{#DBNAME}]: Disk blocks read per second|Total number of disk blocks read per second in this database.|Dependent item|pgsql.dbstat.blks_read.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.blks_read`<br>• Change per second|
|DB [{#DBNAME}]: Detected conflicts per second|Total number of queries canceled per second due to conflicts with recovery in this database.|Dependent item|pgsql.dbstat.conflicts.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.conflicts`<br>• Change per second|
|DB [{#DBNAME}]: Detected deadlocks per second|Total number of deadlocks detected per second in this database.|Dependent item|pgsql.dbstat.deadlocks.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.deadlocks`<br>• Change per second|
|DB [{#DBNAME}]: Temp_bytes written per second|Total amount of data written per second to temporary files by queries in this database.|Dependent item|pgsql.dbstat.temp_bytes.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.temp_bytes`<br>• Change per second|
|DB [{#DBNAME}]: Temp_files created per second|Total number of temporary files created per second by queries in this database.|Dependent item|pgsql.dbstat.temp_files.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.temp_files`<br>• Change per second|
|DB [{#DBNAME}]: Tuples deleted per second|Total number of rows deleted per second by queries in this database.|Dependent item|pgsql.dbstat.tup_deleted.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.tup_deleted`<br>• Change per second|
|DB [{#DBNAME}]: Tuples fetched per second|Total number of rows fetched per second by queries in this database.|Dependent item|pgsql.dbstat.tup_fetched.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.tup_fetched`<br>• Change per second|
|DB [{#DBNAME}]: Tuples inserted per second|Total number of rows inserted per second by queries in this database.|Dependent item|pgsql.dbstat.tup_inserted.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.tup_inserted`<br>• Change per second|
|DB [{#DBNAME}]: Tuples returned per second|Number of rows returned per second by queries in this database.|Dependent item|pgsql.dbstat.tup_returned.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.tup_returned`<br>• Change per second|
|DB [{#DBNAME}]: Tuples updated per second|Total number of rows updated per second by queries in this database.|Dependent item|pgsql.dbstat.tup_updated.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.tup_updated`<br>• Change per second|
|DB [{#DBNAME}]: Commits per second|Number of transactions in this database that have been committed per second.|Dependent item|pgsql.dbstat.xact_commit.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.xact_commit`<br>• Change per second|
|DB [{#DBNAME}]: Rollbacks per second|Total number of transactions in this database that have been rolled back per second.|Dependent item|pgsql.dbstat.xact_rollback.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.xact_rollback`<br>• Change per second|
|DB [{#DBNAME}]: Backends connected|Number of backends currently connected to this database.|Dependent item|pgsql.dbstat.numbackends["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.numbackends`|
|DB [{#DBNAME}]: Checksum failures|Number of data page checksum failures detected in this database.|Dependent item|pgsql.dbstat.checksum_failures.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.checksum_failures`<br>• Matches regular expression: `^\d*$`<br>⛔️Custom on fail: Set value to: `-2`<br>• Change per second<br>⛔️Custom on fail: Set value to: `-1`|
|DB [{#DBNAME}]: Disk blocks read time per second|Time spent per second reading data file blocks by backends.|Dependent item|pgsql.dbstat.blk_read_time.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.blk_read_time`<br>• Custom multiplier: `0.001`<br>• Change per second|
|DB [{#DBNAME}]: Disk blocks write time|Time spent per second writing data file blocks by backends.|Dependent item|pgsql.dbstat.blk_write_time.rate["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.blk_write_time`<br>• Custom multiplier: `0.001`<br>• Change per second|
|DB [{#DBNAME}]: Num of accessexclusive locks|Number of accessexclusive locks for this database.|Dependent item|pgsql.locks.accessexclusive["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.accessexclusive`|
|DB [{#DBNAME}]: Num of accessshare locks|Number of accessshare locks for this database.|Dependent item|pgsql.locks.accessshare["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.accessshare`|
|DB [{#DBNAME}]: Num of exclusive locks|Number of exclusive locks for this database.|Dependent item|pgsql.locks.exclusive["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.exclusive`|
|DB [{#DBNAME}]: Num of rowexclusive locks|Number of rowexclusive locks for this database.|Dependent item|pgsql.locks.rowexclusive["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.rowexclusive`|
|DB [{#DBNAME}]: Num of rowshare locks|Number of rowshare locks for this database.|Dependent item|pgsql.locks.rowshare["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$['{#DBNAME}'].rowshare`<br>⛔️Custom on fail: Discard value|
|DB [{#DBNAME}]: Num of sharerowexclusive locks|Number of sharerowexclusive locks for this database.|Dependent item|pgsql.locks.sharerowexclusive["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.sharerowexclusive`|
|DB [{#DBNAME}]: Num of shareupdateexclusive locks|Number of shareupdateexclusive locks for this database.|Dependent item|pgsql.locks.shareupdateexclusive["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.shareupdateexclusive`|
|DB [{#DBNAME}]: Num of share locks|Number of share locks for this database.|Dependent item|pgsql.locks.share["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.share`|
|DB [{#DBNAME}]: Num of locks total|Total number of locks in this database.|Dependent item|pgsql.locks.total["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.total`|
|DB [{#DBNAME}]: Queries max maintenance time|Max maintenance query time for this database.|Dependent item|pgsql.queries.mro.time_max["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.mro_time_max`|
|DB [{#DBNAME}]: Queries max query time|Max query time for this database.|Dependent item|pgsql.queries.query.time_max["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.query_time_max`|
|DB [{#DBNAME}]: Queries max transaction time|Max transaction query time for this database.|Dependent item|pgsql.queries.tx.time_max["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.tx_time_max`|
|DB [{#DBNAME}]: Queries slow maintenance count|Slow maintenance query count for this database.|Dependent item|pgsql.queries.mro.slow_count["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.mro_slow_count`|
|DB [{#DBNAME}]: Queries slow query count|Slow query count for this database.|Dependent item|pgsql.queries.query.slow_count["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.query_slow_count`|
|DB [{#DBNAME}]: Queries slow transaction count|Slow transaction query count for this database.|Dependent item|pgsql.queries.tx.slow_count["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.tx_slow_count`|
|DB [{#DBNAME}]: Queries sum maintenance time|Sum of maintenance query time for this database.|Dependent item|pgsql.queries.mro.time_sum["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.mro_time_sum`|
|DB [{#DBNAME}]: Queries sum query time|Sum of query time for this database.|Dependent item|pgsql.queries.query.time_sum["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.query_time_sum`|
|DB [{#DBNAME}]: Queries sum transaction time|Sum of transaction query time for this database.|Dependent item|pgsql.queries.tx.time_sum["{#DBNAME}"]<br>**Preprocessing**<br>• JSON Path: `$.tx_time_sum`|

### Trigger prototypes for Database discovery

|Name|Description|Expression|Severity|Dependencies and additional info|
|----|-----------|----------|--------|--------------------------------|
|DB [{#DBNAME}]: Too many recovery conflicts|The primary and standby servers are in many ways loosely connected. Actions on the primary will have an effect on the standby. As a result, there is potential for negative interactions or conflicts between them:<br>https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-CONFLICT|`min(/PostgreSQL by Zabbix agent 2/pgsql.dbstat.conflicts.rate["{#DBNAME}"],5m) > {$PG.CONFLICTS.MAX.WARN:"{#DBNAME}"}`|Average||
|DB [{#DBNAME}]: Deadlock occurred|Number of deadlocks detected per second exceeds {$PG.DEADLOCKS.MAX.WARN:"{#DBNAME}"} for 5m.|`min(/PostgreSQL by Zabbix agent 2/pgsql.dbstat.deadlocks.rate["{#DBNAME}"],5m) > {$PG.DEADLOCKS.MAX.WARN:"{#DBNAME}"}`|High||
|DB [{#DBNAME}]: Checksum failures detected|Data page checksum failures were detected on this database:<br>https://www.postgresql.org/docs/current/checksums.html|`last(/PostgreSQL by Zabbix agent 2/pgsql.dbstat.checksum_failures.rate["{#DBNAME}"])>0`|Average||
|DB [{#DBNAME}]: Too many slow queries|The number of detected slow queries exceeds the limit of {$PG.SLOW_QUERIES.MAX.WARN:"{#DBNAME}"}.|`min(/PostgreSQL by Zabbix agent 2/pgsql.queries.query.slow_count["{#DBNAME}"],5m)>{$PG.SLOW_QUERIES.MAX.WARN:"{#DBNAME}"}`|Warning||

## Feedback

Please report any issues with the template at [`https://support.zabbix.com`](https://support.zabbix.com).

You can also provide feedback, discuss the template, or ask for help at [`ZABBIX forums`](https://www.zabbix.com/forum/zabbix-suggestions-and-feedback).
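As a supplementary note on the "PostgreSQL: Cache hit ratio, %" calculated item above: it is derived from the dbstat buffer counters. The sketch below illustrates the conventional arithmetic, `blks_hit / (blks_hit + blks_read) * 100`; this is an assumption about the formula shape for illustration only, not a copy of the template's actual calculated-item expression.

```python
def cache_hit_pct(blks_hit: int, blks_read: int) -> float:
    """Buffer cache hit ratio in percent; 0.0 when no blocks have been accessed yet.

    blks_hit  -- blocks found already in the buffer cache (pg_stat_database.blks_hit)
    blks_read -- blocks read from disk (pg_stat_database.blks_read)
    """
    total = blks_hit + blks_read
    return 100.0 * blks_hit / total if total else 0.0

# 9900 cache hits vs. 100 disk reads -> 99.0
print(cache_hit_pct(9_900, 100))
```

A ratio computed from raw counters reflects the whole server lifetime; Zabbix applies the calculation to per-second rates, so the template's value tracks recent behavior instead.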