Architecture

Understanding Arctic's system architecture and components

This article explains Arctic's system architecture, the components involved, and how they work together.

System Overview

Arctic is a distributed network routing system consisting of three main components:

Arctic Agent

The Arctic agent is the central orchestrator. It:

  • Manages cluster state and configuration
  • Exposes the HTTP API for management
  • Coordinates network components automatically
  • Participates in state synchronization with other peers
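
The state-synchronization step can be pictured as a per-key merge of versioned entries. A minimal sketch in Python, where the `(value, version)` entry layout and last-writer-wins rule are illustrative assumptions, not Arctic's actual replication protocol:

```python
# Illustrative last-writer-wins merge for replicated cluster state.
# The (value, version) entry layout is an assumption for this sketch;
# Arctic's real synchronization format is not shown here.

def merge_state(local: dict, remote: dict) -> dict:
    """Merge two peers' state maps, keeping the higher-versioned entry."""
    merged = dict(local)
    for key, (value, version) in remote.items():
        if key not in merged or merged[key][1] < version:
            merged[key] = (value, version)
    return merged

local = {"svc-a": ("10.0.0.1", 3), "svc-b": ("10.0.0.2", 1)}
remote = {"svc-b": ("10.0.9.9", 2), "svc-c": ("10.0.0.3", 1)}
merged = merge_state(local, remote)
print(merged)
```

Because the merge is deterministic and commutative for distinct versions, any two peers exchanging state converge on the same map regardless of message order.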

Pegasus (TProxy Service)

Pegasus handles TCP traffic using Linux TProxy (transparent proxy):

  • Intercepts TCP packets and proxies them to remote peers
  • Preserves original source addresses when Transparent Mode is enabled
  • Supports QoS with bandwidth limiting
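
The bandwidth-limiting bullet corresponds to the classic token-bucket mechanism. A self-contained sketch (the rate and burst figures are illustrative, not Arctic configuration values, and the real proxy would apply this per connection):

```python
# Minimal token-bucket rate limiter, the standard mechanism behind
# per-connection bandwidth caps. Parameters are illustrative only.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s      # sustained throughput
        self.capacity = burst_bytes       # maximum burst size
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        """Consume nbytes of budget if enough tokens have accumulated."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

bucket = TokenBucket(rate_bytes_per_s=1_000_000, burst_bytes=64_000)
print(bucket.allow(32_000))  # fits within the initial burst
print(bucket.allow(64_000))  # exceeds the remaining budget
```

A proxy would call `allow()` before forwarding each chunk and delay the write until the bucket refills, turning the boolean check into traffic shaping.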

Tempest (IP Tunnel Service)

Tempest handles non-TCP traffic using encrypted tunnels:

  • Creates encrypted tunnels between peers
  • Routes UDP, ICMP, and other IP protocols
  • Supports multiple simultaneous peer connections
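
Conceptually, a tunnel service wraps each captured IP packet in a small frame before sending it to the remote peer. The header layout below (peer id plus payload length) is an assumption for illustration, and encryption is omitted; Arctic's actual frame format is not documented here:

```python
# Conceptual tunnel framing: each encapsulated packet is prefixed with
# a peer id and payload length. Layout is illustrative; the real
# (encrypted) Arctic frame format is not shown.
import struct

HEADER = struct.Struct("!HI")  # peer id (uint16), payload length (uint32)

def encapsulate(peer_id: int, payload: bytes) -> bytes:
    return HEADER.pack(peer_id, len(payload)) + payload

def decapsulate(frame: bytes) -> tuple[int, bytes]:
    peer_id, length = HEADER.unpack_from(frame)
    return peer_id, frame[HEADER.size:HEADER.size + length]

frame = encapsulate(7, b"udp-datagram")
print(decapsulate(frame))
```

The peer id in the header is what lets a single tunnel endpoint demultiplex traffic from multiple simultaneous peer connections.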

Network Integration

Arctic integrates with the Linux network stack to route traffic:

MACVLAN Interfaces

When services require dedicated interfaces, Arctic creates MACVLAN interfaces:

  • Provides isolated network identity for service traffic
  • Enables traffic matching based on interface
  • Allows binding applications to specific service addresses
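
Creating a MACVLAN interface is a standard iproute2 operation, so the interfaces Arctic manages can be reproduced by hand for debugging. This helper only builds the commands (running them requires root); the interface name, parent device, and address are illustrative:

```python
# Build the iproute2 commands equivalent to creating a MACVLAN
# interface. Names and addresses below are examples, not values
# Arctic itself uses.
def macvlan_cmds(parent: str, name: str, cidr: str) -> list[list[str]]:
    return [
        ["ip", "link", "add", "link", parent, "name", name,
         "type", "macvlan", "mode", "bridge"],
        ["ip", "addr", "add", cidr, "dev", name],
        ["ip", "link", "set", name, "up"],
    ]

for cmd in macvlan_cmds("eth0", "arctic0", "192.168.50.10/24"):
    print(" ".join(cmd))
```

Bridge mode lets MACVLAN interfaces on the same parent reach each other directly, which is the usual choice when several services share one physical NIC.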

Traffic Routing

Arctic routes traffic based on:

  • Source and destination IP ranges (CIDR)
  • Interface bindings
  • Protocol type (TCP vs non-TCP)

High Availability

Arctic does not have a single point of failure:

  • Each agent operates independently
  • State is replicated across peers
  • Any peer can handle API requests
  • Services continue if individual peers fail

For production deployments, consider:

  • Deploying 3+ agents for redundancy
  • Using a load balancer for API access
  • Monitoring agent health with alerting

Resource Usage

Typical resource consumption:

Component      CPU      Memory
------------   ------   -------
Arctic Agent   Low      ~100 MB
Pegasus        Medium   ~50 MB
Tempest        Low      ~30 MB

Resource usage scales with:

  • Number of active services
  • Traffic volume through proxies
  • Number of peers in the cluster
