The Interconnect (IC) is the core facility for data exchange among segment nodes during distributed query execution in YMatrix. YMatrix supports three IC types.
- ic-tcp: Implemented over TCP.
- ic-udpifc: Implemented over UDP, providing TCP-like reliability.
- ic-tunnel: Introduced in YMatrix 6.7.1. Designed to overcome the limitations of the traditional ICs, with improved environment compatibility and performance.

| IC Type | Key Characteristics | Typical Use Cases | Configuration Method | Supported Versions |
|---|---|---|---|---|
| ic-tcp | Highest theoretical performance (TCP-based); susceptible to query hangs; limited by the system TCP port count, so scalability is weak in large clusters or highly parallel workloads | Small clusters with good network conditions and few nodes; latency-sensitive, peak-performance scenarios | Set GUC `gp_interconnect_type=tcp`, globally via `gpconfig` or per session with `PGOPTIONS="-c gp_interconnect_type=tcp" psql` | All versions |
| ic-udpifc | Sensitive to network conditions: high latency or a small MTU degrades performance significantly | OLAP queries; low-latency networks with a large MTU; general workloads with large data volumes. Preferred in most cases, except very-high-volume data migration, where its performance drops sharply | Set GUC `gp_interconnect_type=udpifc`, globally via `gpconfig` or per session with `PGOPTIONS="-c gp_interconnect_type=udpifc" psql` | All versions |
| ic-tunnel | Newly designed IC type focused on maximum environment compatibility; no manual configuration required; automatically detects cluster topology and adapts seamlessly to scale-out/in or master-standby failover | Large clusters where ic-tcp may hang or stall due to node count; data migration, especially when ic-udpifc degrades sharply; detail-oriented queries with many columns, where vectorized motion underperforms; environments with poor networks or connection rate limiting | Set GUC `gp_interconnect_type=tunnel`, globally via `gpconfig` or per session with `PGOPTIONS="-c gp_interconnect_type=tunnel" psql` | 6.7.1 and later |
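As a concrete sketch of the configuration methods in the table (the `gpconfig`/`gpstop` invocations assume Greenplum-style cluster utilities, and the database name `mydb` is a placeholder):

```shell
# Inspect the current interconnect type across the cluster
gpconfig -s gp_interconnect_type

# Set it cluster-wide, then reload the configuration
gpconfig -c gp_interconnect_type -v tunnel
gpstop -u

# Or override it for a single session only
PGOPTIONS="-c gp_interconnect_type=tcp" psql -d mydb
```

The session-level override is useful for A/B-testing a query under different IC types without touching cluster-wide settings.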
Note!
ic-tunnel delivers strong performance across most scenarios. However, for the following specific cases, consider evaluating the traditional ic-tcp or ic-udpifc to achieve peak performance:
- Single-node deployments
- OLTP point-query workloads
ic-tunnel supports the following GUCs:

- `mx_interconnect_compress`: Controls whether data compression is enabled. When set to `on`, compression is activated within ic-tunnel, significantly reducing inter-node network traffic at the cost of increased QE CPU usage. Recommended for high-volume data transfer (e.g., data migration) or bandwidth-constrained environments.
- `matrix.ic_tunnel_port_delta`: Controls the ic-tunnel listening port offset. Default is 200. The ic-tunnel server port is computed as postmaster-port + delta, and users must ensure the resulting port is unique and unoccupied.

ic-tunnel uses a proxy model. Each segment postmaster hosts an ic-tunnel server process. All QE-to-QE network communication is routed through this server, so only one persistent TCP connection is required between any two segments.
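Assuming `mx_interconnect_compress` can be set like other GUCs (the `gpconfig` invocation and database name are assumptions based on Greenplum-style tooling), enabling compression might look like:

```shell
# Enable ic-tunnel compression for one psql session only
PGOPTIONS="-c mx_interconnect_compress=on" psql -d mydb

# Or cluster-wide, then reload the configuration
gpconfig -c mx_interconnect_compress -v on
gpstop -u
```

A session-level trial is a low-risk way to measure the traffic reduction against the extra QE CPU cost before committing cluster-wide.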
You can list ic-tunnel server processes using ps:

```shell
$ ps -ef | grep ic-tunnel
u 2769149 2769130 0 03:33 ? 00:00:03 postgres: 4004, ic-tunnel server
u 2769150 2769129 0 03:33 ? 00:00:03 postgres: 4003, ic-tunnel server
u 2769164 2769128 0 03:33 ? 00:00:03 postgres: 4002, ic-tunnel server
u 2769195 2769170 0 03:33 ? 00:00:02 postgres: 4000, ic-tunnel server
```
When hot_standby = on, ic-tunnel server processes also run on standby and mirror nodes.
Each ic-tunnel server process requires a dedicated TCP listening port. On the same host, all such ports must be distinct. The port is computed as:
```
ic-tunnel-server-port := postmaster-port + delta
```
The port is derived automatically by adding a fixed offset (delta) to the postmaster’s listening port. No manual configuration is needed — however, users must ensure the computed port is both unique and unoccupied.
The offset is controlled by the GUC matrix.ic_tunnel_port_delta, which accepts any positive or negative integer. Its default value is 200.
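As a worked example (port values assumed, matching the ps listing above), the default delta of 200 maps each postmaster port to a distinct ic-tunnel server port on the same host:

```shell
# Compute ic-tunnel server ports for a set of example postmaster ports
delta=200   # default matrix.ic_tunnel_port_delta
for pm_port in 4000 4002 4003 4004; do
  echo "postmaster $pm_port -> ic-tunnel server $((pm_port + delta))"
done
# -> postmaster 4000 -> ic-tunnel server 4200, and so on
```

Because every postmaster port on a host is already unique, a fixed delta keeps the derived ports unique as well; collisions can only come from unrelated services occupying a computed port.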
ic-tunnel uses a new hot-pluggable architecture built upon YMatrix’s newly introduced IC plugin framework. It is provided by the shared library matrixts.so. Therefore:
To use ic-tunnel, the GUC shared_preload_libraries must include matrixts. Creating the matrixts extension is not required.

Note!
If gp_interconnect_type is set to tunnel but shared_preload_libraries does not include matrixts, the cluster still starts successfully and client sessions can connect. In this case, the IC plugin automatically falls back to ic-tcp, and a warning such as WARNING: ic: unknown interconnect type "tunnel", fallback to "tcp" temporarily appears in the database log. This fallback keeps the cluster usable under misconfiguration, but administrators should correct the configuration promptly.
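To catch this misconfiguration before switching to ic-tunnel, the preload setting can be verified up front (the `gpconfig` invocation assumes Greenplum-style tooling; the database name is a placeholder):

```shell
# Confirm matrixts appears in shared_preload_libraries on all nodes
gpconfig -s shared_preload_libraries

# Or check from a live session
psql -d mydb -c "SHOW shared_preload_libraries;"
```

If matrixts is missing, add it with `gpconfig -c shared_preload_libraries` and restart the cluster before setting `gp_interconnect_type=tunnel`.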