Deploy Asset Tracker
on your infrastructure
Complete guide to install, configure, and maintain Asset Tracker on your own servers. Docker, Vercel, or bare-metal.
Overview
Asset Tracker is an open-source IT asset management platform built with Next.js, PostgreSQL, and Prisma. It supports multi-tenancy, role-based access control, SSO, audit logging, and dozens of integrations.
Self-contained
Single Docker image with built-in PostgreSQL option. No external services required.
Secure by default
Encryption at rest, rate limiting, account lockout, MFA, and full audit trail out of the box.
Lightweight
Runs on a 1 vCPU / 1 GB VPS. Scales to thousands of assets without breaking a sweat.
System Requirements
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 1 vCPU | 2 vCPU |
| RAM | 1 GB | 2 GB |
| Disk | 10 GB | 20 GB+ (depends on attachments) |
| OS | Any Linux with Docker | Ubuntu 22.04+ / Debian 12+ |
| PostgreSQL | 15 | 16 |
| Node.js | 20 (bare-metal only) | 20 LTS |
Quick Start
Get up and running in under 5 minutes with Docker Compose.
Clone the repository
git clone https://github.com/luca-fitseveneleven/assetTracker.git
cd assetTracker
Configure environment
cp .env.example .env
Edit .env and set the required values:
DATABASE_URL=postgresql://assettracker:CHANGE_ME@db:5432/assettracker
POSTGRES_USER=assettracker
POSTGRES_PASSWORD=CHANGE_ME
POSTGRES_DB=assettracker
BETTER_AUTH_URL=https://assets.yourdomain.com
BETTER_AUTH_SECRET= # openssl rand -base64 32
ENCRYPTION_KEY= # openssl rand -hex 32
CRON_SECRET= # openssl rand -hex 16
SELF_HOSTED=true
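The three secrets can be generated in one step. A small helper (a sketch, assuming `openssl` is installed; the function name is illustrative) that prints lines ready to append to `.env`:

```shell
# Print freshly generated secrets in .env format
gen_secrets() {
  echo "BETTER_AUTH_SECRET=$(openssl rand -base64 32)"
  echo "ENCRYPTION_KEY=$(openssl rand -hex 32)"
  echo "CRON_SECRET=$(openssl rand -hex 16)"
}

# Append them to the env file:
# gen_secrets >> .env
```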
Start everything
docker compose --profile with-db up -d --build
Initialize the database and create an admin
docker compose exec app npx prisma migrate deploy
docker compose exec app node scripts/create-admin.mjs
Open the app
Navigate to http://your-server-ip:3000 and sign in with your admin credentials.
Docker Compose
Recommended for most self-hosted deployments. Includes PostgreSQL and the app in a single stack.
With bundled database
docker compose --profile with-db up -d --build
With external database
If you're using Supabase, Neon, or a managed PostgreSQL instance:
# Set DATABASE_URL in .env to your external connection string, then:
docker compose --profile app-only up -d --build
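Managed providers usually require TLS. An illustrative connection string (host, user, and password are placeholders, not real values):

```shell
# Illustrative only — substitute your provider's host, user, and password.
# Most managed PostgreSQL services require SSL; sslmode=require enforces it.
DATABASE_URL="postgresql://assettracker:PASSWORD@db.example.com:5432/assettracker?sslmode=require"
```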
Initialize
# Apply migrations
docker compose exec app npx prisma migrate deploy
# Create admin user (interactive prompt)
docker compose exec app node scripts/create-admin.mjs
Verify
# Check status
docker compose ps
# View logs
docker compose logs app --tail 20
# Test HTTP
curl -I http://localhost:3000
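If you script the setup, it helps to wait for the app to answer before running migrations. A minimal sketch (the function name and timeout are assumptions):

```shell
# Poll http://localhost:3000 until it responds, for up to ~60 seconds
wait_for_app() {
  for _ in $(seq 1 30); do
    curl -sf -o /dev/null "http://localhost:3000" && return 0
    sleep 2
  done
  echo "app did not become ready" >&2
  return 1
}

# Usage: wait_for_app && docker compose exec app npx prisma migrate deploy
```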
Vercel
Zero-config deployment with built-in cron scheduling and global CDN.
Import to Vercel
Go to vercel.com/new, import from GitHub, and set environment variables:
| Variable | Value |
|---|---|
| DATABASE_URL | Your PostgreSQL connection string |
| BETTER_AUTH_URL | https://your-domain.com |
| BETTER_AUTH_SECRET | Generate with openssl rand -base64 32 |
| ENCRYPTION_KEY | Generate with openssl rand -hex 32 |
| CRON_SECRET | Generate with openssl rand -hex 16 |
| SELF_HOSTED | true |
Run migrations
From your local machine with the production DATABASE_URL:
DATABASE_URL="postgresql://..." npx prisma migrate deploy
DATABASE_URL="postgresql://..." node scripts/create-admin.mjs
To apply migrations automatically on every deploy, set the Vercel build command to npx prisma migrate deploy && next build.
Bare Metal / PM2
For servers without Docker. Requires Node.js 20 and PostgreSQL installed directly.
# Install Node.js 20
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
# Install PM2
npm install -g pm2
# Install PostgreSQL 16
sudo apt install -y postgresql-16
Set up the database
sudo -u postgres psql -c "
CREATE USER assettracker WITH PASSWORD 'STRONG_PASSWORD';
CREATE DATABASE assettracker OWNER assettracker;
GRANT ALL PRIVILEGES ON DATABASE assettracker TO assettracker;
"
Build and run
git clone https://github.com/luca-fitseveneleven/assetTracker.git
cd assetTracker
cp .env.example .env
# Edit .env with your values
npm ci
npx prisma generate
npx prisma migrate deploy
npm run build
node scripts/create-admin.mjs
# Start with PM2
pm2 start npm --name "assettracker" -- start
pm2 save
pm2 startup # follow printed command to enable on boot
Environment Variables
Required
| Variable | Description |
|---|---|
| DATABASE_URL | PostgreSQL connection string |
| BETTER_AUTH_SECRET | Signs auth tokens — openssl rand -base64 32 |
| BETTER_AUTH_URL | Public URL of the app (e.g. https://assets.example.com) |
| ENCRYPTION_KEY | AES-256-GCM key (64 hex chars) — openssl rand -hex 32 |
Recommended
| Variable | Description | Default |
|---|---|---|
| SELF_HOSTED | Disables landing page, pricing, registration | false |
| CRON_SECRET | Required in production. Protects all 7 cron endpoints. Min 16 chars. | — |
| NODE_ENV | Runtime environment | production |
Feature Flags
All enabled by default unless noted. Set to false to disable.
| Variable | Default | Description |
|---|---|---|
| FEATURE_RATE_LIMITING | true | API rate limiting |
| FEATURE_ACCOUNT_LOCKOUT | true | Lock after 5 failed logins (15 min) |
| FEATURE_SESSION_TIMEOUT | true | 30 min inactivity timeout |
| FEATURE_AUDIT_LOGGING | true | Security event logging |
| FEATURE_EMAIL_NOTIFICATIONS | false | Requires email provider |
| MAINTENANCE_MODE | false | Show maintenance page |
Reverse Proxy & SSL
Caddy (recommended — automatic HTTPS)
sudo apt install -y caddy
Edit /etc/caddy/Caddyfile:
assets.yourdomain.com {
reverse_proxy localhost:3000
}
sudo systemctl restart caddy
Caddy automatically provisions and renews Let's Encrypt certificates.
Nginx (alternative)
sudo apt install -y nginx certbot python3-certbot-nginx
Create /etc/nginx/sites-available/assettracker:
server {
server_name assets.yourdomain.com;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300s;
client_max_body_size 50m;
}
}
sudo ln -s /etc/nginx/sites-available/assettracker /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl restart nginx
sudo certbot --nginx -d assets.yourdomain.com
Microsoft Entra ID (SSO)
Register an app in Azure
Go to Azure Portal → Microsoft Entra ID → App registrations → New registration
- Name: Asset Tracker
- Supported account types: Single tenant (internal) or multi-tenant
- Redirect URI: Web —
{BETTER_AUTH_URL}/api/auth/oauth2/callback/microsoft
Create a client secret
Go to Certificates & secrets → New client secret. Copy the Value (not the Secret ID).
Set environment variables
MICROSOFT_CLIENT_ID=your-application-client-id
MICROSOFT_CLIENT_SECRET=your-client-secret-value
MICROSOFT_TENANT_ID=your-directory-tenant-id
The "Sign in with Microsoft" button appears automatically on the login page.
LDAP / Active Directory
Asset Tracker supports LDAP authentication for on-premise Active Directory environments. Configure via Admin Settings → Authentication, or set environment variables:
LDAP_URL=ldap://your-dc.example.com:389
LDAP_BIND_DN=cn=service-account,dc=example,dc=com
LDAP_BIND_PASSWORD=service-account-password
LDAP_SEARCH_BASE=dc=example,dc=com
LDAP_SEARCH_FILTER=(sAMAccountName={{username}})
Users authenticating via LDAP are automatically created in Asset Tracker on first login.
Email
Required for password reset emails, magic links, and notifications. Configure via environment variables or Admin Settings → Email.
| Provider | Free Tier | Setup |
|---|---|---|
| Brevo | 300/day | EMAIL_PROVIDER=brevo + BREVO_API_KEY |
| SendGrid | 100/day | EMAIL_PROVIDER=sendgrid + SENDGRID_API_KEY |
| Postmark | 100/month | EMAIL_PROVIDER=postmark + POSTMARK_API_KEY |
| Amazon SES | 62k/month (with EC2) | EMAIL_PROVIDER=ses + AWS credentials |
| Mailgun | Pay as you go | EMAIL_PROVIDER=mailgun + MAILGUN_API_KEY |
EMAIL_PROVIDER=brevo
BREVO_API_KEY=xkeysib-...
EMAIL_FROM=noreply@yourdomain.com
EMAIL_FROM_NAME=Asset Tracker
Test from Admin Settings → Email → Send Test Email after configuration.
File Storage
Asset attachments, photos, and thumbnails. Default is local filesystem.
Local (default)
Files are stored in ./uploads/. For Docker, the compose file mounts a persistent volume automatically.
S3-compatible
Works with AWS S3, Cloudflare R2, MinIO, or Backblaze B2.
STORAGE_PROVIDER=s3
STORAGE_BUCKET=assettracker-files
STORAGE_REGION=eu-central-1
STORAGE_ACCESS_KEY=AKIA...
STORAGE_SECRET_KEY=...
# For non-AWS S3-compatible services:
# STORAGE_ENDPOINT=https://s3.example.com
Azure Blob Storage
STORAGE_PROVIDER=azure
STORAGE_AZURE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=...
STORAGE_AZURE_CONTAINER=attachments
Cron Jobs
Seven automated jobs handle maintenance tasks. On Vercel, they run automatically via vercel.json. On VPS/bare-metal, set up system cron:
crontab -e
# Session cleanup — daily at 1 AM
0 1 * * * curl -sf -H "Authorization: Bearer $CRON_SECRET" https://assets.yourdomain.com/api/cron/sessions
# Email notifications — daily at 2 AM
0 2 * * * curl -sf -H "Authorization: Bearer $CRON_SECRET" https://assets.yourdomain.com/api/cron/notifications
# Workflow automation — daily at 3 AM
0 3 * * * curl -sf -H "Authorization: Bearer $CRON_SECRET" https://assets.yourdomain.com/api/cron/workflows
# GDPR retention — daily at 4 AM
0 4 * * * curl -sf -H "Authorization: Bearer $CRON_SECRET" https://assets.yourdomain.com/api/cron/gdpr-retention
# LDAP sync — daily at 5 AM
0 5 * * * curl -sf -H "Authorization: Bearer $CRON_SECRET" https://assets.yourdomain.com/api/cron/ldap-sync
# Cache & rate limit cleanup — daily at 6 AM
0 6 * * * curl -sf -H "Authorization: Bearer $CRON_SECRET" https://assets.yourdomain.com/api/cron/cleanup
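If you prefer a single crontab entry, the calls above can be wrapped in one helper (a sketch; BASE_URL and CRON_SECRET must be exported, and the function name is illustrative):

```shell
# Call each cron endpoint in sequence; log failures but keep going
run_cron() {
  base_url="${BASE_URL:-https://assets.yourdomain.com}"
  for job in sessions notifications workflows gdpr-retention ldap-sync cleanup; do
    curl -sf -H "Authorization: Bearer $CRON_SECRET" "$base_url/api/cron/$job" \
      || echo "cron job '$job' failed" >&2
  done
}
```

Note this flattens the staggered schedule into one run; keep separate entries if the ordering gaps matter to you.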
Backup & Restore
Database backup
# Docker
docker compose exec -T db pg_dump -U assettracker assettracker | gzip > backup_$(date +%Y%m%d).sql.gz
# Bare-metal
pg_dump -U assettracker assettracker | gzip > backup_$(date +%Y%m%d).sql.gz
Automated daily backup
crontab -e
# Daily backup at 2 AM, retain 30 days
0 2 * * * cd /opt/assetTracker && docker compose exec -T db pg_dump -U assettracker assettracker | gzip > /backups/at_$(date +\%Y\%m\%d).sql.gz
5 2 * * * find /backups -name "at_*.sql.gz" -mtime +30 -delete
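A backup you cannot restore is not a backup. At minimum, verify archive integrity after each run; a sketch (the function name is illustrative):

```shell
# Exit non-zero (and print CORRUPT) if the gzip archive is damaged
verify_backup() {
  if gunzip -t "$1" 2>/dev/null; then
    echo "OK: $1"
  else
    echo "CORRUPT: $1" >&2
    return 1
  fi
}

# Usage: verify_backup /backups/at_20260309.sql.gz
```

Periodically restoring a backup into a throwaway database is the only full test.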
Restore
# Docker
gunzip -c backup_20260309.sql.gz | docker compose exec -T db psql -U assettracker assettracker
# Bare-metal
gunzip -c backup_20260309.sql.gz | psql -U assettracker assettracker
Updating
Docker
cd /opt/assetTracker
git pull origin master
docker compose --profile with-db up -d --build
docker compose exec app npx prisma migrate deploy
Bare-metal / PM2
cd /opt/assetTracker
git pull origin master
npm ci
npx prisma generate
npx prisma migrate deploy
npm run build
pm2 restart assettracker
Vercel
Push to your connected branch. If the build command includes npx prisma migrate deploy, migrations apply automatically.
Monitoring
Logs
# Docker
docker compose logs app --tail 100 -f
# PM2
pm2 logs assettracker
Health check
curl -f http://localhost:3000/api/health || echo "App is down"
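The health check can double as a watchdog run from cron. A sketch for the Docker deployment (the compose path and script name are assumptions):

```shell
# Restart the app container when /api/health stops responding
check_and_restart() {
  if ! curl -sf "http://localhost:3000/api/health" > /dev/null; then
    echo "health check failed, restarting app" >&2
    docker compose -f /opt/assetTracker/docker-compose.yml restart app
  fi
}

# crontab example: */5 * * * * /usr/local/bin/assettracker-watchdog.sh
```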
Sentry (optional)
Set SENTRY_DSN for automatic error tracking. Add SENTRY_AUTH_TOKEN at build time for source maps.
Security Hardening
Production checklist
- NODE_ENV=production is set
- All secrets are unique and generated with openssl rand
- SELF_HOSTED=true (disables public landing page and registration)
- HTTPS is enforced via reverse proxy
- Database is not exposed publicly (Docker internal network or localhost only)
- Firewall allows only ports 80, 443, and 22 (SSH)
- Docker runs as non-root user (built into the Dockerfile)
- CRON_SECRET is set and strong
- Database backups are configured and tested
- ENCRYPTION_KEY is set (encrypts API keys, MFA secrets, integrations)
Firewall (UFW)
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
Database access (Docker)
For production, remove the PostgreSQL port mapping from docker-compose.yml. The app connects over the Docker network — no host port needed.
# Remove or comment out under the db service:
# ports:
# - "${DB_PORT:-5432}:5432"
Troubleshooting
App won't start
docker compose logs app --tail 50
docker compose exec app npx prisma migrate status
"relation does not exist" error
Migrations haven't been applied:
docker compose exec app npx prisma migrate deploy
Session errors after secret change
If you changed BETTER_AUTH_SECRET, existing sessions become invalid. Users need to clear cookies and log in again.
Database connection refused
docker compose ps db
docker compose logs db --tail 20
docker compose exec db psql -U assettracker -c "SELECT 1"
Build fails with Prisma errors
docker compose exec app npx prisma generate
Email not sending
- Check Admin Settings → Email for configuration status
- Click Send Test Email
- Check logs:
docker compose logs app | grep -i email
Out of disk space
# Check disk
df -h
# Clean Docker cache
docker system prune -a
# Check database size
docker compose exec db psql -U assettracker -c \
"SELECT pg_size_pretty(pg_database_size('assettracker'));"