Networking

This page describes the network requirements for the OpenLM Platform across all deployment paths.

Conceptual overview

The OpenLM Platform exposes a single HTTPS (port 443) ingress endpoint. This is the only port that needs to be accessible from outside the cluster network. All communication from field agents and users accessing the web UI flows through this single endpoint.

HTTP to HTTPS redirect

Port 80 (HTTP) is also typically exposed on the ingress controller to redirect HTTP requests to HTTPS. No application traffic is served over HTTP.
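One way to confirm the redirect behaves as expected is to issue a plain HTTP request and inspect the response without following it. The sketch below is self-contained: it stands up a local stub that mimics the ingress controller's redirect (the `openlm.example.com` FQDN is a placeholder), then probes it. Against a real deployment you would call `fetch_redirect` with your platform FQDN instead.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def fetch_redirect(host, port=80):
    """Issue a plain HTTP GET and return (status, Location) without following it."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("GET", "/")
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location", "")
    finally:
        conn.close()

# Local stub standing in for the ingress controller's HTTP listener.
class _Redirect(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(308)  # permanent redirect to HTTPS
        self.send_header("Location", "https://openlm.example.com" + self.path)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), _Redirect)
threading.Thread(target=server.serve_forever, daemon=True).start()

status, location = fetch_redirect("127.0.0.1", server.server_address[1])
print(status, location)  # e.g. 308 https://openlm.example.com/
server.shutdown()
```

A healthy ingress returns a 301 or 308 with an `https://` Location; any 200 response on port 80 would indicate application traffic being served over HTTP, which the platform never does.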

The Kubernetes cluster, its nodes, infrastructure services, and internal networking can reside on a fully enclosed private network. OpenLM does not require any inbound access to the cluster internals from the outside, and there is no remote management. Apart from pulling container images (see Outbound traffic), the platform initiates no communication to external systems. The only external exposure is the HTTPS ingress configured during deployment.

Inbound traffic

All inbound traffic enters through a single HTTPS (443) ingress endpoint:

| Source | Protocol | Port | Purpose |
| --- | --- | --- | --- |
| Broker | HTTPS | 443 | License server event collection. Installed on machines running license servers. |
| Workstation Agent | HTTPS | 443 | Workstation usage event collection. Installed on end-user workstations. |
| Directory Sync Agent (DSA) | HTTPS | 443 | Directory scanning and synchronization events. Installed on a machine with directory access. |
| Web UI | HTTPS | 443 | Users accessing platform administration, dashboards, and reporting. |

All agents and users connect to the same fully qualified domain name (FQDN) configured during deployment. No direct access to Kubernetes nodes, API server, databases, or any internal service is required from outside the cluster network.
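The single-endpoint model can be expressed as a small configuration check: every connecting component should resolve to exactly one `(scheme, host, port)` triple. A minimal sketch, where the endpoint settings and the `openlm.example.com` FQDN are hypothetical stand-ins for the agents' real configuration:

```python
from urllib.parse import urlparse

# Hypothetical endpoint settings for each connecting component; in a real
# deployment these come from the agents' configuration files.
endpoints = {
    "broker": "https://openlm.example.com",
    "workstation_agent": "https://openlm.example.com",
    "directory_sync_agent": "https://openlm.example.com",
    "web_ui": "https://openlm.example.com",
}

def ingress_triples(endpoints):
    """Return the set of (scheme, host, port) triples in use.

    A correct OpenLM deployment yields exactly one triple:
    ('https', <fqdn>, 443)."""
    triples = set()
    for url in endpoints.values():
        u = urlparse(url)
        port = u.port or (443 if u.scheme == "https" else 80)
        triples.add((u.scheme, u.hostname, port))
    return triples

triples = ingress_triples(endpoints)
print(triples)  # {('https', 'openlm.example.com', 443)}
```

If this check ever reports more than one triple, some component is pointed at a node IP or alternate port rather than the shared ingress FQDN.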

Outbound traffic

The Kubernetes cluster requires outbound access only for pulling container images:

| Destination | Protocol | Port | Purpose |
| --- | --- | --- | --- |
| public.ecr.aws/r3q3q2f4 | HTTPS | 443 | OpenLM container image registry |

No other outbound connectivity is required from the cluster at runtime. The platform does not communicate with any external OpenLM servers – it runs entirely within your network. If your environment uses an HTTP proxy or firewall allowlist, add the registry endpoint above.

External integrations

The platform can also integrate with external services such as ServiceNow. If such integrations are configured, outbound access to those endpoints will be required as well. Add any integration endpoints to the firewall allowlist as needed.
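Because the outbound surface is so small, the allowlist can be assembled programmatically from the registry host plus whatever integrations are enabled. A sketch, assuming a helper of our own devising; the ServiceNow instance hostname is a hypothetical placeholder:

```python
# Registry host comes from the deployment docs (public.ecr.aws/r3q3q2f4);
# integration hosts are illustrative placeholders for whatever you enable.
REGISTRY = "public.ecr.aws"

def build_allowlist(integrations=None):
    """Return the outbound HTTPS (443) hosts the cluster needs."""
    hosts = {REGISTRY}
    if integrations:
        hosts.update(integrations.values())
    return sorted(hosts)

# No integrations: only the registry is required at runtime.
print(build_allowlist())  # ['public.ecr.aws']

# With a ServiceNow integration (hypothetical instance hostname):
print(build_allowlist({"servicenow": "yourinstance.service-now.com"}))
```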

Tip: For air-gapped environments, container images can be pre-loaded into a private registry. Contact OpenLM for guidance on offline image distribution.

TLS certificates

The TLS certificate must be trusted by every machine that connects to the platform, including agent machines, user browsers, and services inside the cluster that call back to the FQDN. This is critical for successful HTTPS connections.

See TLS certificates for full requirements, configuration steps, and a troubleshooting guide.
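On the client side, "trusted" means the TLS library verifies the platform certificate and hostname against the machine's trust store. A minimal sketch of the verification settings a connecting client should use; the CA bundle path is a placeholder for your own file:

```python
import ssl

# Default context: verifies the server certificate chain and the hostname
# against the system trust store - what every agent needs for the FQDN.
ctx = ssl.create_default_context()

# If the platform certificate is issued by a private CA, that CA must be
# trusted as well. The path below is a placeholder for your CA bundle:
# ctx.load_verify_locations(cafile="/etc/openlm/ca-bundle.pem")

print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)  # True True
```

If a connection fails with a certificate verification error, the usual cause is a machine whose trust store is missing the issuing CA, not a platform-side problem.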

On-premise machines

In addition to the platform-level requirements above, on-premise Kubernetes clusters require inter-node communication and management access.

Inter-node ports (all nodes)

| Port | Protocol | Purpose |
| --- | --- | --- |
| 6443 | TCP | Kubernetes API server |
| 8472 | UDP | Flannel VXLAN overlay network |
| 10250 | TCP | Kubelet metrics |
| 51820 | UDP | Flannel WireGuard (if enabled) |
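Before installation it can be worth verifying that the TCP ports above are reachable between nodes. A sketch of a simple reachability probe; the demo listens on a local ephemeral port so the block runs anywhere, but in practice you would probe peer node IPs on 6443 and 10250:

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds within the timeout.

    Only meaningful for the TCP ports (6443, 10250); the UDP overlay
    ports (8472, 51820) cannot be checked with a plain connect."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo: listen on an ephemeral local port and probe it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

reachable = tcp_port_open("127.0.0.1", port)
print(reachable)  # True
listener.close()
```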

Role-specific ports

| Node role | Port | Protocol | Purpose |
| --- | --- | --- | --- |
| Control plane | 80 | TCP | HTTP ingress (redirect to HTTPS) |
| Control plane | 443 | TCP | HTTPS ingress (agents, UI, API) |
| Infrastructure | 30432 | TCP | PostgreSQL NodePort for Power BI / external reporting tools |

Network topology

  • All nodes must reside on the same L2 network or have full inter-node connectivity with no port restrictions between them.
  • The control plane node is the ingress entry point. DNS should resolve to this node's IP or a load balancer in front of it.
  • Administrative access (SSH, port 22) is required on all nodes for management.

Outbound (provisioning)

During initial setup, nodes also need access to:

| Destination | Purpose |
| --- | --- |
| public.ecr.aws/r3q3q2f4 | OpenLM container images |
| Kubernetes distribution source | Kubernetes installation (varies by distribution) |
| AlmaLinux / RHEL mirrors | OS package updates (dnf) |

After installation, only the container registry endpoint is required.

Private cloud (AWS)

On AWS, networking is handled through a VPC with public and private subnets.

VPC layout

| Component | Detail |
| --- | --- |
| VPC CIDR | /22 block (for example, 10.0.0.0/22) |
| Private subnets | 3 (one per availability zone) – EKS nodes and managed services |
| Public subnets | 3 – load balancers and NAT gateway |
| NAT gateway | 1, for outbound internet from private subnets |
| S3 endpoint | Gateway endpoint for S3 access without NAT |
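The /22 block can be carved up with the standard library's `ipaddress` module. The split below is illustrative only (three /24 private subnets, public subnets taken from the remaining quarter); the actual CIDRs are chosen by the provisioning tooling:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/22")        # 1,024 addresses
quarters = list(vpc.subnets(new_prefix=24))      # four /24 blocks

# Illustrative split: three /24 private subnets (one per AZ), and three
# smaller /26 public subnets carved from the remaining /24.
private_subnets = quarters[:3]
public_subnets = list(quarters[3].subnets(new_prefix=26))[:3]

print(vpc.num_addresses)                         # 1024
print([str(s) for s in private_subnets])         # 10.0.0.0/24 .. 10.0.2.0/24
print([str(s) for s in public_subnets])          # 10.0.3.0/26 ..
```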

Security groups

| Service | Allowed source | Port | Protocol |
| --- | --- | --- | --- |
| EKS API server | Configured CIDRs (eks_public_access_cidrs) | 443 | TCP |
| RDS SQL Server | EKS cluster security group | 1433 | TCP |
| MSK (Kafka) | EKS cluster security group | 9096 | TCP |
| ElastiCache (Redis) | EKS cluster security group | 6379 | TCP |
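The table amounts to a least-privilege rule set: only the API server accepts traffic from outside CIDRs, while every data service admits traffic solely from the cluster security group. A sketch expressing it as data (the security-group ID and the CIDR are placeholders, not values from a real deployment):

```python
# Placeholders standing in for the real EKS cluster security group and
# the operator-configured eks_public_access_cidrs value.
EKS_SG = "sg-eks-cluster"

rules = [
    {"service": "EKS API server",     "source": "203.0.113.0/24", "port": 443,  "protocol": "tcp"},
    {"service": "RDS SQL Server",     "source": EKS_SG,           "port": 1433, "protocol": "tcp"},
    {"service": "MSK (Kafka)",        "source": EKS_SG,           "port": 9096, "protocol": "tcp"},
    {"service": "ElastiCache (Redis)","source": EKS_SG,           "port": 6379, "protocol": "tcp"},
]

# Least-privilege check: no data service is reachable from a CIDR.
data_services = [r for r in rules if r["service"] != "EKS API server"]
print(all(r["source"] == EKS_SG for r in data_services))  # True
```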

Ingress

External traffic (agents and users) reaches the cluster through an AWS load balancer in the public subnets, forwarding to the ingress controller in the private subnets on ports 443 and 80.

Outbound

  • Workload nodes in private subnets reach the internet through the NAT gateway.
  • Container images are pulled from public.ecr.aws/r3q3q2f4 via NAT.

Private cloud (Azure)

On Azure, networking uses a VNet with Azure CNI mode for AKS.

VNet layout

| Component | Detail |
| --- | --- |
| VNet CIDR | /22 block (for example, 10.0.0.0/22) providing 1,024 IP addresses |
| Networking mode | Azure CNI (each pod gets a VNet IP) |
| IP capacity | ~400 IPs for pods, nodes, and growth |

Network sizing

With Azure CNI, every pod consumes an IP address from the VNet:

  • ~29 pod IPs per node
  • 6–7 nodes provide ~174–203 pod IPs
  • ~143 running pods plus node IPs fit within a /22 block with room for growth
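The sizing arithmetic above can be sanity-checked directly. The figures are taken from the text; the per-node pod IP count is the assumed usable limit, not an Azure constant:

```python
# Figures from the sizing notes above; ~29 usable pod IPs per node is the
# assumed per-node limit for this deployment.
POD_IPS_PER_NODE = 29
vnet_capacity = 2 ** (32 - 22)   # /22 -> 1,024 addresses

for nodes in (6, 7):
    print(nodes, nodes * POD_IPS_PER_NODE)   # 6 -> 174, 7 -> 203

# ~143 running pods plus up to 7 node IPs stay well inside the ~400 IPs
# budgeted, which in turn fits comfortably within the /22.
needed = 143 + 7
print(needed, needed <= 400 <= vnet_capacity)  # 150 True
```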

Ingress

External traffic enters through an Azure load balancer or Application Gateway on ports 443 and 80, forwarding to the AKS ingress controller.

Managed service connectivity

Azure SQL Managed Instance and Azure Cache for Redis are accessed over private endpoints or VNet integration. No public internet exposure is required for managed services.

Outbound

  • AKS nodes pull container images from public.ecr.aws/r3q3q2f4.
  • Configure the AKS outbound type and firewall rules to allow access to the registry endpoint.