Logging

Your database cluster generates logs through regular activity. Reviewing logs is helpful for debugging, auditing, and performance tuning. Crunchy Bridge shows a cluster's live logs in the dashboard or through the CLI, and logs can also be exported in syslog format.

Logging configurations

Postgres offers several logging settings worth reviewing for your specific needs. The parameters outlined below can be set for a cluster using configuration parameters.

Log volume

You can adjust how much Postgres captures in the logs by setting the log_statement parameter. The available values are listed below, followed by a short example:

  • none - log nothing
  • ddl - data definition statements such as CREATE, ALTER, and DROP
  • mod - all DDL plus data-modifying statements such as INSERT, UPDATE, and DELETE
  • all - every statement (generally not recommended due to log volume)
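For example, here is a minimal sketch of checking and changing this setting with standard Postgres SQL, assuming your role is permitted to run ALTER SYSTEM; on Crunchy Bridge, the configuration parameters feature mentioned above is the supported way to manage these values:

    -- Check the current statement-logging level
    SHOW log_statement;

    -- Log all DDL plus INSERT, UPDATE, and DELETE statements
    ALTER SYSTEM SET log_statement = 'mod';

    -- Reload the configuration so the change takes effect without a restart
    SELECT pg_reload_conf();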

Query duration

If you would like to capture information about queries that run longer than some number of milliseconds, you can configure that using the log_min_duration_statement parameter. This is particularly helpful for debugging long-running queries.
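As an illustrative sketch (the 500 ms threshold is an arbitrary example value), the following standard Postgres SQL logs any statement that takes longer than half a second to complete:

    -- Log every statement that runs for more than 500 ms
    ALTER SYSTEM SET log_min_duration_statement = '500ms';
    SELECT pg_reload_conf();

Matching queries then appear in the logs with their duration, along the lines of LOG: duration: 1532.411 ms statement: SELECT ....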

Lock wait

If you want to investigate queries that wait for a lock longer than the deadlock_timeout setting, which defaults to 1 second, you can enable log_lock_waits and those lock waits will be captured in the logs.
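A minimal sketch of enabling this with standard Postgres SQL, again assuming your role is allowed to change these settings:

    -- Report any session that waits on a lock longer than deadlock_timeout
    ALTER SYSTEM SET log_lock_waits = on;

    -- Optionally lower the threshold from the 1 second default; note that
    -- deadlock_timeout also controls how often deadlock detection runs
    ALTER SYSTEM SET deadlock_timeout = '500ms';

    SELECT pg_reload_conf();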

Check out the Postgres docs if you'd like to learn more about configuring logging.

Live logs

A live stream of a cluster's logs is available in the dashboard. The live log stream uses the Postgres logging settings currently configured for your cluster.

Live logs in the CLI

A live stream of a cluster's logs can be viewed in the CLI using cb logs.

Exporting logs

Beyond live logs, Crunchy Bridge can export logs in syslog format to any logging system or provider. See export logs for more information.

Audit logs

Crunchy Bridge runs pgAudit by default, which generates audit logs. See auditing for more details.