Overview

The Interconnect (IC) is the core facility for data exchange among segment nodes during distributed query execution in YMatrix. YMatrix supports three IC types.

Types

  • Traditional IC types:
    • ic-tcp: Implemented over TCP.
    • ic-udpifc: Implemented over UDP, providing TCP-like reliability.
  • New IC type:
    • ic-tunnel: Introduced in YMatrix 6.7.1. Designed to overcome limitations of traditional ICs, delivering improved environment compatibility and performance.

Detailed Comparison

  • ic-tcp (supported in all versions)
    • Key characteristics: highest theoretical performance (TCP-based); susceptible to query hangs; limited by the system's TCP port count, so it scales poorly in large clusters or highly parallel workloads.
    • Typical use cases: small clusters with few nodes and good network conditions; latency-sensitive, peak-performance scenarios.
    • Configuration: set the GUC gp_interconnect_type=tcp, either globally via gpconfig or per session with PGOPTIONS="-c gp_interconnect_type=tcp" psql.
  • ic-udpifc (supported in all versions)
    • Key characteristics: sensitive to network conditions; high latency or a small MTU degrades performance significantly.
    • Typical use cases: OLAP queries; low-latency networks with a large MTU; general workloads with large data volumes. Preferred in most cases, except very-high-volume data migration, where its performance drops sharply.
    • Configuration: set the GUC gp_interconnect_type=udpifc, either globally via gpconfig or per session with PGOPTIONS="-c gp_interconnect_type=udpifc" psql.
  • ic-tunnel (supported in 6.7.1 and later)
    • Key characteristics: newly designed for maximum environment compatibility; requires no manual configuration; automatically detects cluster topology and adapts seamlessly to scale-out/in or master-standby failover.
    • Typical use cases: large-scale clusters where ic-tcp may hang or stall due to node count; data migration, especially when ic-udpifc performance degrades sharply; detail-oriented queries with many columns, where vectorized motion underperforms; environments with poor network conditions or connection rate limiting.
    • Configuration: set the GUC gp_interconnect_type=tunnel, either globally via gpconfig or per session with PGOPTIONS="-c gp_interconnect_type=tunnel" psql.
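As a concrete illustration of the configuration methods above, the following shell sketch applies both approaches (the gpconfig and psql invocations are assumed from standard Greenplum-style tooling; the database name mydb is hypothetical):

```shell
# Cluster-wide: set the interconnect type for all sessions
# (gpconfig syntax assumed; run on the master host)
gpconfig -c gp_interconnect_type -v tunnel
gpstop -u                                  # reload configuration

# Inspect the value currently configured across the cluster
gpconfig -s gp_interconnect_type

# Per-session override: applies only to this psql session
PGOPTIONS="-c gp_interconnect_type=udpifc" psql -d mydb \
    -c "SHOW gp_interconnect_type;"
```

The per-session form is useful for comparing IC types on the same workload without disturbing the cluster-wide setting.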

Note!
ic-tunnel delivers strong performance across most scenarios. However, for the following specific cases, consider evaluating traditional ic-tcp or ic-udpifc to achieve peak performance:

  • Single-node deployments
  • OLTP point-query workloads

Using ic-tunnel

Control Parameters

ic-tunnel supports the following GUCs:

  • mx_interconnect_compress: Controls whether data compression is enabled.

    • When set to on, compression is activated within ic-tunnel, significantly reducing inter-node network traffic at the cost of increased QE CPU usage. Recommended for high-volume data transfer (e.g., data migration) or bandwidth-constrained environments.
  • matrix.ic_tunnel_port_delta: Controls the ic-tunnel listening port offset. Default is 200.

    • The ic-tunnel server port is computed as postmaster-port + delta. Users must ensure that the resulting port is unique and unoccupied.
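For example, compression might be enabled only for a bulk-transfer session rather than cluster-wide (a sketch; the database name mydb is hypothetical, and the gpconfig syntax is assumed from standard Greenplum-style tooling):

```shell
# Enable ic-tunnel compression for one migration session only,
# trading QE CPU time for reduced inter-node traffic
PGOPTIONS="-c mx_interconnect_compress=on" psql -d mydb

# Or enable it cluster-wide and reload the configuration
gpconfig -c mx_interconnect_compress -v on
gpstop -u
```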

Architecture Overview

Server Process

ic-tunnel uses a proxy model. Each segment postmaster hosts an ic-tunnel server process. All QE-to-QE network communication is routed through this server, and only one persistent TCP connection is required between any two segments.

You can list ic-tunnel server processes using ps:

$ ps -ef | grep ic-tunnel
u        2769149 2769130  0 03:33 ?        00:00:03 postgres:  4004, ic-tunnel server
u        2769150 2769129  0 03:33 ?        00:00:03 postgres:  4003, ic-tunnel server
u        2769164 2769128  0 03:33 ?        00:00:03 postgres:  4002, ic-tunnel server
u        2769195 2769170  0 03:33 ?        00:00:02 postgres:  4000, ic-tunnel server

When hot_standby = on, ic-tunnel server processes also run on standby and mirror nodes.

Listening Port

Each ic-tunnel server process requires a dedicated TCP listening port. On the same host, all such ports must be distinct. The port is computed as:

ic-tunnel-server-port := postmaster-port + delta

The port is derived automatically by adding a fixed offset (delta) to the postmaster's listening port, so no manual configuration is needed. However, users must ensure the computed port is both unique and unoccupied.

The offset is controlled by the GUC matrix.ic_tunnel_port_delta, which accepts any positive or negative integer. Its default value is 200.
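The computation can be checked directly in the shell (the port numbers are illustrative; ss is part of iproute2):

```shell
# With the default delta of 200, a segment postmaster on port 4002
# gets an ic-tunnel server port of 4002 + 200 = 4202.
postmaster_port=4002
delta=200
ic_tunnel_port=$((postmaster_port + delta))
echo "$ic_tunnel_port"    # prints 4202

# Confirm nothing else is listening on the computed port;
# an empty result means the port is free.
ss -ltn "( sport = :$ic_tunnel_port )"
```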

Dynamic Loading

ic-tunnel uses a hot-pluggable architecture built on YMatrix's newly introduced IC plugin framework, and is provided by the shared library matrixts.so. Therefore:

  • To use ic-tunnel, the GUC shared_preload_libraries must include matrixts.
  • Creating the matrixts extension is not required.
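A possible way to satisfy the preload requirement (gpconfig syntax assumed from standard Greenplum-style tooling; check the current value first so existing preloaded libraries are not dropped):

```shell
# Inspect the current preload list before changing it
gpconfig -s shared_preload_libraries

# Add matrixts (append to any existing comma-separated list as needed)
gpconfig -c shared_preload_libraries -v 'matrixts'

# shared_preload_libraries only takes effect after a full restart
gpstop -ar
```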

Note!
If gp_interconnect_type is set to tunnel but shared_preload_libraries does not include matrixts, the cluster still starts successfully and client sessions can connect. In this case the IC plugin automatically falls back to ic-tcp, and a warning such as WARNING: ic: unknown interconnect type "tunnel", fallback to "tcp" temporarily is written to the database log. This fallback keeps the cluster usable under misconfiguration, but administrators should correct the configuration promptly.
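To check whether this fallback has occurred, the database log can be scanned for the warning (a sketch; the log directory path is an assumption based on typical Greenplum-style layouts):

```shell
# Scan the master's log directory for the fallback warning emitted
# when matrixts is missing from shared_preload_libraries
grep -r 'unknown interconnect type' "$MASTER_DATA_DIRECTORY/log/"

# Confirm the preload configuration that should prevent the fallback
psql -d mydb -c "SHOW shared_preload_libraries;"
```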