Tillered Arctic

Quickstart

Create a cluster using compose apply

This guide walks you through creating your first Arctic cluster using the compose approach. You will define your cluster configuration in a YAML file and apply it with a single command.

Create a Cluster Configuration

Create a file called cluster.yaml with your cluster configuration:

version: v1

license: license.json    # Path to your license file

peers:                   # Define all hosts in the cluster
  - name: node-a
    address: 192.168.1.10:8080
  - name: node-b
    address: 192.168.1.20:8080

services:
  - name: tunnel-a-to-b
    source_peer: node-a  # Where traffic enters
    target_peer: node-b  # Where traffic exits
    transport_type: tcp
    interface:
      enabled: true      # Create a MACVLAN interface for routing

Place your license.json file in the same directory as cluster.yaml.
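
For reference, a minimal working directory might look like this (the directory name is arbitrary; the file names are the ones used in this guide):

quickstart/
├── cluster.yaml     # Cluster configuration
└── license.json     # License file referenced by cluster.yaml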

Apply the Configuration

Apply the configuration to create your cluster:

arctic compose apply ./cluster.yaml

Important

Save the credentials from the command output; they cannot be retrieved later:

export ARCTIC_CLIENT_ID=cli_xxxxxxxxxxxxxxxxxxxxxx
export ARCTIC_CLIENT_SECRET=sec_xxxxxxxxxxxxxxxxxxxxxxx

The CLI stores these in ~/.config/arctic/config.yaml automatically.

Verify the Setup

Check that your cluster is working:

# List peers
arctic peers list

# List services
arctic services list

# Get service details (including interface IP)
arctic services get <service-id>

The <service-id> is a ULID shown in the output of arctic services list.
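
If you want to script the lookup, a rough sketch like the following can work; it assumes the service ID appears in the first column of the second row of the list output, which may differ in your CLI version:

# Grab the first service ID from the list (adjust the awk expression to match
# your actual output format).
SERVICE_ID=$(arctic services list | awk 'NR==2 {print $1}')
arctic services get "$SERVICE_ID"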

Updating Your Cluster

Edit cluster.yaml and use these commands to manage changes:

# Validate configuration syntax
arctic compose validate ./cluster.yaml

# Preview changes before applying
arctic compose diff ./cluster.yaml

# Apply changes to the cluster
arctic compose apply ./cluster.yaml

The CLI detects changes and updates only what is necessary.
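
For example, to add a second tunnel you might append another entry under services in cluster.yaml (the new service name and reversed peer direction below are illustrative), then run the commands above:

services:
  - name: tunnel-a-to-b        # Existing service, unchanged
    source_peer: node-a
    target_peer: node-b
    transport_type: tcp
    interface:
      enabled: true
  - name: tunnel-b-to-a        # New service (illustrative)
    source_peer: node-b
    target_peer: node-a
    transport_type: tcp
    interface:
      enabled: true

Running arctic compose diff ./cluster.yaml before applying should then report only the new service as an addition.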

Service Configuration Options

The quickstart example uses the default MACVLAN interface mode. Here are additional configuration options.

Static IP Assignment

Assign a static IPv4 address to the service interface:

services:
  - name: tunnel-a-to-b
    source_peer: node-a
    target_peer: node-b
    transport_type: tcp
    interface:
      enabled: true
      ipv4: 192.168.3.32/24  # Static IPv4 with subnet
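
To confirm the address is up on the source host, you can check with standard iproute2 tooling (a quick sanity check, not an Arctic command):

# Look for the static address on any local interface
ip -4 addr show | grep '192.168.3.32'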

Policy-Based Routing

Route specific subnets through the tunnel instead of using a MACVLAN interface. Use this when MACVLAN is unavailable (e.g., some cloud environments):

services:
  - name: tunnel-a-to-b
    source_peer: node-a
    target_peer: node-b
    transport_type: tcp
    routes:
      - dest_cidr: 172.31.8.0/24
        priority: 100
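
On the source peer you can inspect the resulting routing state with standard Linux tools; the exact routing table and rule priority Arctic installs are not specified here, so this is only a general check:

# Show policy routing rules and the route that would be used for the subnet
ip rule show
ip route get 172.31.8.1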

Troubleshooting

Agent Not Starting

Check logs for errors:

journalctl -u arctic-agent -f

Common causes include missing dependencies, port 8080 already in use, or the agent not running as root.
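
To rule out a port conflict, check whether something else is already listening on port 8080 (assumes Linux with iproute2 installed):

# Show the process, if any, bound to port 8080
ss -ltnp | grep ':8080'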

Compose Apply Fails

  • Verify all agents are running: curl http://<ip>:8080/livez (see the loop below)
  • Check license file exists and is valid
  • Ensure network connectivity between agents
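
A quick way to probe every agent at once, using the peer addresses from cluster.yaml (substitute your own):

# Check each agent's liveness endpoint; -f makes curl fail on HTTP errors
for ip in 192.168.1.10 192.168.1.20; do
  curl -fsS "http://$ip:8080/livez" && echo "$ip OK" || echo "$ip unreachable"
done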

Handshake Fails

  • Both agents must use the same license
  • Agents must reach each other on port 8080
  • Check for firewall rules blocking ports 8080, 51840/UDP, and 61000 (see the example below)
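
If a host firewall is in the way, opening the ports looks roughly like this with firewalld; adjust for your firewall tooling, and note that the protocol for port 61000 is assumed to be TCP here:

firewall-cmd --permanent --add-port=8080/tcp     # Agent port used by peers and /livez
firewall-cmd --permanent --add-port=51840/udp
firewall-cmd --permanent --add-port=61000/tcp    # Protocol assumed TCP; confirm for your setup
firewall-cmd --reload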

Next Steps

You now have a working Arctic cluster with encrypted tunnels between your hosts. The Next Steps guide covers where to go from here.