By Praveen Yedlapalli
We recently conducted a new performance study of running a PostgreSQL database in a vSphere 7.0 U2 VM with Intel Optane DC persistent memory (PMem). This blog presents the highlights from that study and can be viewed as an extension of the Persistent Memory Performance white paper we published in 2019; please refer to that white paper for additional details and best practices.
The PostgreSQL database is widely used in a variety of environments, including in the VMware vCenter Server appliance. Because this database is so pervasive, VMware is always improving its performance on vSphere by using new and existing technologies. One such technology is persistent memory, which is gaining traction in the datacenter. Our previous technical papers (Intel Optane DC Persistent Memory “Memory Mode” Virtualized Performance Study and Persistent Memory Performance in vSphere 6.7) showed how persistent memory can improve the performance of workloads such as MySQL, Oracle Database, SQL Server, and Redis. This blog covers the PostgreSQL database performance improvements with PMem.
For this performance study, we focus on the AppDirect mode of persistent memory where PMem is presented to the VMs as a very fast storage device (NVDIMM). Details about the configuration and workload are given in the tables below. Please note that the PostgreSQL database is not modified in any way.
| Component | Configuration |
|---|---|
| CPU | 2 sockets / 48 cores / 96 threads, Intel Xeon Platinum 8260L CPU @ 2.40GHz (“Cascade Lake”) |
| Memory | 2x 384 GB = 768 GB DRAM |
| Persistent memory | 2x 1.5 TB = 3 TB (Intel Optane 256 GB NVDIMMs) |
| Storage devices | HPE MO001600 NVMe SSD; Samsung PM1725a NVMe SSD |

Table 1. Host configuration
| Resource | Configuration |
|---|---|
| vCPUs | 48 |
| vRAM | 200 GB |
| NVDIMMs | 1 TB NVDIMM for the database; 200 GB NVDIMM for write-ahead logging (WAL) |
| Storage | 1 TB on the MO001600; 200 GB on the PM1725a |
| Guest operating system | Photon OS 3.0 rev 3 (kernel 4.19.160-4) |

Table 2. VM configuration
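Inside the guest, the NVDIMMs listed in Table 2 appear as pmem devices, and one common way to consume them is through a DAX-capable filesystem (the original white paper covers best practices in more depth). Below is a minimal sketch of that guest-side setup; the device, region, and mount names are illustrative assumptions, not the exact commands used in the study.

```
# Minimal guest-side sketch; device, region, and mount names are assumptions.
# List the persistent-memory namespaces the VM sees.
ndctl list -N

# If a namespace is not already in fsdax mode, create one that is.
ndctl create-namespace --mode=fsdax --region=region0

# Create a DAX-capable filesystem on the pmem block device and mount it with DAX.
mkfs.ext4 /dev/pmem0
mkdir -p /mnt/pmem-db
mount -o dax /dev/pmem0 /mnt/pmem-db
```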
| Item | Configuration |
|---|---|
| PostgreSQL version | 13 |
| PostgreSQL configuration changes | huge_pages = on; shared_buffers = 80 GB (default 128 MB); work_mem = 1 GB; max_wal_size = 50 GB (default 1 GB); min_wal_size = 1 GB (default 80 MB) |
| pgbench database load | pgbench -i -s 50000 → 742 GB database |
| pgbench test run | pgbench -P 10 -c 72 -j 24 -T 600 |

Table 3. Workload configuration
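For reference, the configuration changes in Table 3 map directly onto postgresql.conf. A minimal sketch is shown below; the data-directory path is an illustrative assumption, and the defaults in the comments are the stock PostgreSQL 13 values.

```
# Append the Table 3 overrides to postgresql.conf (path is an assumption).
cat >> /mnt/pmem-db/pgdata/postgresql.conf <<'EOF'
huge_pages = on
shared_buffers = 80GB      # default 128MB
work_mem = 1GB             # default 4MB
max_wal_size = 50GB        # default 1GB
min_wal_size = 1GB         # default 80MB
EOF
```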
We tried three different configurations:
- 2x NVMe SSDs (baseline): The WAL and database partitions were on two separate, fast NVMe SSDs.
- WAL on PMem: The WAL partition was placed on the 200 GB PMem NVDIMM.
- Database and WAL on PMem: Both the database and WAL partitions were on PMem NVDIMMs (a layout sketch follows this list).
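The blog does not spell out how the partitions were mapped into the PostgreSQL directory layout, but a minimal sketch of the "Database and WAL on PMem" case, assuming the Table 2 devices are mounted at illustrative paths, looks like this:

```
# Assumed mounts: /mnt/pmem-db (1 TB NVDIMM) and /mnt/pmem-wal (200 GB NVDIMM).
# Place the data directory on one device and the WAL directory on the other.
initdb -D /mnt/pmem-db/pgdata --waldir=/mnt/pmem-wal/pg_wal

# For the "WAL on PMem" case, only the WAL location changes; the data
# directory would instead live on the NVMe SSD mount.
```

Separating WAL from the data files keeps the sequential, fsync-heavy log writes off the device that serves the more random database I/O, which helps explain why moving only the WAL already yields a measurable gain.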
Figure 1 shows the throughput and latency of a TPC-B-like workload, as observed from pgbench (running in the same VM). The workload provides a database stress test, characterized by:
- Significant disk input/output
- Moderate system and application execution time
- Transaction integrity
The workload measures throughput in terms of how many transactions per second a system can perform.
By moving both the database and WAL to PMem, we achieved a 44.6% improvement in throughput and a 30.8% reduction in transaction latency.
Figure 1. TPC-B-like workload throughput improved by 44.6% with PMem
Figure 2 shows the throughput and latency for read-only transactions. We observed up to a 3.4x improvement in throughput. These improvements are a direct benefit of the low-latency, high-throughput PMem devices backing the database.
Figure 2. Read-only throughput improved by approximately 3.4x using PMem
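The blog does not show the exact read-only invocation; pgbench's built-in select-only script is the usual way to drive such a test, keeping the client, thread, and duration settings from Table 3:

```
# Read-only run sketch: -S selects pgbench's built-in select-only script.
pgbench -S -P 10 -c 72 -j 24 -T 600
```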
We also tried a VM with only 1 GB of memory (instead of 200 GB). This had minimal impact on PostgreSQL performance when running on PMem, but a significant impact on the NVMe SSD case: in that configuration, we observed a read-only throughput gain of approximately 4.5x for PMem over the NVMe SSDs.
Results summary:
- Just moving the WAL partition to PMem, while leaving the database on an NVMe SSD, improved throughput by 13.4% and reduced latency by 11.8%.
- Moving both the database and WAL partitions to PMem improved throughput by 44.6% and reduced latency by 30.8%.
- Moving both the database and WAL partitions to PMem improved read-only throughput by approximately 3.4x.
To conclude, PMem on vSphere can significantly improve the performance of a PostgreSQL database. This can have a direct impact on the performance of millions of applications that employ PostgreSQL.