Omaship

March 21, 2026 · 14 min read

Error Tracking and Monitoring for Rails SaaS in 2026: Sentry, AppSignal, Honeybadger, and the Setup That Saves Your Ass at 3AM

Jeronim Morina

Founder, Omaship

Every Rails app looks stable right until production starts doing weird little crimes. A job retries forever. A checkout webhook fails silently. One tenant gets a 500 because of a nil three associations deep. Error tracking is how you find the fire. Monitoring is how you find the smoke before the fire starts.

Most founders get this backward. They either install five dashboards and drown in noise, or they ship with nothing and discover bugs from angry customer emails. Both approaches are dumb. The goal is not "full observability." The goal is simple: when production breaks, you should know what broke, who it affects, and whether revenue is on fire within minutes.

Here is the practical setup that works for a Rails 8 SaaS in 2026.

The three layers you actually need

Monitoring gets messy when people mix different jobs into one bucket. Rails SaaS needs three distinct layers:

  • Error tracking -- exceptions, failed jobs, broken requests, controller crashes, third-party API errors. This answers: what just blew up?
  • Performance monitoring -- slow requests, N+1 queries, queue latency, memory pressure, external API timing. This answers: what is degrading before it becomes an outage?
  • Health monitoring -- uptime checks, deploy checks, database/storage capacity, background worker heartbeat. This answers: is the app alive and reachable right now?

If you only install error tracking, you will learn about problems after users hit them. If you only install uptime checks, you will know the homepage returns 200 while billing has been broken for six hours. If you only install APM, you will have gorgeous charts and no idea why emails stopped sending. Use all three layers. Keep each one minimal.

Sentry vs AppSignal vs Honeybadger

These are the three serious options for most Rails SaaS teams. They all work. They are not equal.

Sentry

Best for: teams that want deep exception context, performance tracing, and flexible alerting with room to grow.

  • Strengths: excellent error grouping, stack traces, release tracking, source maps if you add frontend JS later, strong ecosystem, good tracing.
  • Weaknesses: can become noisy if you do not tune ignored errors, performance features are powerful but easy to over-collect.
  • Take: the default recommendation if you want one tool that can grow from "exceptions only" into real observability.

AppSignal

Best for: Rails teams that care about performance first and want a very polished Ruby experience.

  • Strengths: beautiful APM UX, great slow query insights, request timelines, anomaly detection, Rails-native feel.
  • Weaknesses: exception tracking is good, but Sentry still feels sharper for debugging ugly production crashes.
  • Take: the strongest choice if performance and throughput are your main pain point.

Honeybadger

Best for: smaller SaaS teams who want a clean, focused setup with less dashboard bloat.

  • Strengths: simple setup, good exception reporting, uptime checks included, less overwhelming than Sentry.
  • Weaknesses: weaker performance tooling and less depth for tracing complex systems.
  • Take: great if you want sane defaults and do not need enterprise-level tracing gymnastics.

My opinionated recommendation: start with Sentry + a basic uptime check. It gives you the most leverage per minute of setup. If you later hit serious latency or throughput issues, add AppSignal or upgrade your existing APM coverage. Do not start with two overlapping monitoring vendors because you read a thread from someone with a seven-person platform team.

The minimum production setup

For a new Rails SaaS, this is enough:

  1. One error tracker (Sentry, AppSignal, or Honeybadger).
  2. One uptime/health endpoint that checks web, database, queue, and cache basics.
  3. Alerting only for actionable failures -- new exception spike, failed deploy, queue backlog, failed payment/webhook path, app unreachable.
  4. Release tagging so every error shows which deploy caused it.
  5. User context so support can answer "who is affected?" without detective work.

Not included on purpose: distributed tracing across seven microservices, custom metrics for every model callback, dashboards with 48 panels, or Slack alerts for every 404. That is how you build a monitoring system nobody trusts.

Installing Sentry in Rails 8

The Sentry setup is boring, which is exactly what you want.

# Gemfile
gem "sentry-ruby"
gem "sentry-rails"

# Then run `bundle install` and set SENTRY_DSN in credentials or ENV.
# config/initializers/sentry.rb
Sentry.init do |config|
  config.dsn = Rails.application.credentials.dig(:sentry, :dsn) || ENV["SENTRY_DSN"]
  config.breadcrumbs_logger = [:active_support_logger, :http_logger]
  config.enabled_environments = %w[production staging]
  config.traces_sample_rate = 0.1

  config.before_send = lambda do |event, hint|
    exception = hint[:exception]

    if exception.is_a?(ActiveRecord::RecordNotFound)
      nil
    else
      event
    end
  end
end

Two important calls here. First, enabled_environments keeps development noise out of production dashboards. Second, before_send drops expected junk like ActiveRecord::RecordNotFound if your app already handles it cleanly. If you do not filter obvious non-problems, your alert feed becomes wallpaper.
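For exceptions you always want dropped, sentry-ruby also supports an exclusion list, which is often simpler than branching inside before_send. A sketch (ActionController::RoutingError is an illustrative entry; adjust the list to whatever your app actually treats as routine):

```ruby
# config/initializers/sentry.rb (excerpt)
Sentry.init do |config|
  # Exceptions named here are never sent to Sentry.
  config.excluded_exceptions += [
    "ActiveRecord::RecordNotFound",
    "ActionController::RoutingError"
  ]
end
```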

Add user, account, and release context

Raw stack traces are not enough. Production debugging gets dramatically faster when every event tells you which user, which tenant, and which release was involved.

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :set_error_context

  private

  def set_error_context
    return unless defined?(Sentry)

    Sentry.set_user(id: Current.user.id, email: Current.user.email) if Current.user
    Sentry.set_tags(account_id: Current.account.id) if defined?(Current.account) && Current.account
    Sentry.set_tags(release: ENV["GIT_SHA"]) if ENV["GIT_SHA"].present?
  end
end

Now an exception is no longer "undefined method on NilClass." It is "undefined method on NilClass for account 42 after release 3e75da2, affecting user [email protected]." That difference is the difference between a five-minute fix and a two-hour archaeological dig.

Background jobs are where bugs go to hide

Rails SaaS apps keep moving more critical work into background jobs: email delivery, webhook processing, report generation, billing syncs, imports, AI tasks, backfills. That is great for request latency and terrible for discoverability if you do not monitor job failures properly.

Solid Queue gives you a durable queue. It does not give you operational awareness by itself. You still need visibility into retries, dead jobs, and queue backlog.

# app/jobs/application_job.rb
class ApplicationJob < ActiveJob::Base
  around_perform do |job, block|
    Sentry.with_scope do |scope|
      scope.set_tags(job: job.class.name, queue: job.queue_name)
      block.call
    end
  end
end

This ensures failed jobs carry queue-specific context. More importantly, monitor queue latency itself. If jobs are not failing but are waiting 25 minutes to start, users still experience a broken product.
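The latency calculation itself is trivial. A minimal pure-Ruby sketch; in a real app you would feed it the created_at of the oldest waiting job (for Solid Queue, the oldest row in its ready-executions table). The method names and the 15-minute threshold are illustrative, not part of any gem:

```ruby
# Age of the oldest waiting job in seconds; 0.0 when the queue is empty.
def queue_latency_seconds(oldest_enqueued_at, now: Time.now)
  return 0.0 if oldest_enqueued_at.nil?
  [now - oldest_enqueued_at, 0.0].max
end

# Alert when jobs have been waiting longer than the threshold.
def queue_backlog_alert?(oldest_enqueued_at, threshold_seconds: 900)
  queue_latency_seconds(oldest_enqueued_at) > threshold_seconds
end
```

Run this on a schedule (a recurring job works fine) and page yourself when the backlog check trips.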

Health checks beat vibes

Every production app needs a machine-readable health endpoint. Not a homepage ping. A real health check.

# config/routes.rb
get "/health", to: "health#show"

# app/controllers/health_controller.rb
class HealthController < ActionController::Base
  def show
    ActiveRecord::Base.connection.execute("SELECT 1")
    head :ok
  rescue => error
    Rails.logger.error("Health check failed: #{error.class}: #{error.message}")
    head :service_unavailable
  end
end

This is the bare minimum. A better version also checks queue database connectivity, disk pressure, and a recent worker heartbeat. But even the simple endpoint is valuable because your uptime provider can alert you when deploys break boot, migrations wedge the app, or the database connection pool dies.
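One cheap building block for that better version is a worker heartbeat: workers touch a timestamp (a cache key or a database row) on a schedule, and the health check verifies it is fresh. A pure-Ruby sketch of the freshness logic; the storage mechanism and the 60-second window are assumptions you should tune:

```ruby
# True when a worker has checked in within max_age seconds.
def worker_heartbeat_ok?(last_beat_at, max_age: 60, now: Time.now)
  !last_beat_at.nil? && (now - last_beat_at) <= max_age
end
```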

What to alert on

Alerts should wake you up only when action matters. Good alerts:

  • New exception spike after a deploy.
  • Checkout or billing failure rate increases.
  • Webhook processing backlog grows past a threshold.
  • App unreachable for more than 1-2 minutes.
  • Database or disk capacity nearly full.

Bad alerts:

  • Single 404s.
  • A single timeout from a flaky third-party API that auto-retried successfully.
  • Every background job retry.
  • Every crawler-induced weird request to /wp-admin.

If your alert channel screams all day, you will ignore it the one time revenue is actually leaking. Monitoring systems die from false positives more often than from missing features.
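The "new exception spike" rule can be as simple as comparing the current window against a recent baseline. A hedged sketch of that logic (the multiplier and floor are illustrative; trackers like Sentry ship this as configurable issue alert rules, so you rarely write it yourself):

```ruby
# Flag a spike when the current window exceeds the baseline average
# by a multiplier, with a floor so quiet apps do not false-alarm.
def exception_spike?(current_count, baseline_counts, multiplier: 3, min_errors: 10)
  baseline = baseline_counts.sum.to_f / [baseline_counts.size, 1].max
  current_count >= min_errors && current_count > baseline * multiplier
end
```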

The mistakes founders keep making

  • No release tracking. If you cannot map an error spike to the exact deploy, rollback decisions become guesswork.
  • No user or tenant context. Support asks "who is affected?" and engineering has no clue.
  • Ignoring background jobs. Web requests look fine while all your async work is on fire.
  • Alerting on symptoms, not business impact. CPU at 80% matters less than failed signups or broken billing.
  • Assuming logs are enough. Raw logs are a haystack. Error trackers group, dedupe, enrich, and prioritize.

A sane monitoring stack for different stages

Stage 1: New product, pre-PMF

Sentry or Honeybadger, one health endpoint, one uptime ping, and one alert route to email or Slack. That is enough.

Stage 2: Paying customers, async workflows matter

Add queue latency tracking, webhook failure alerts, and release tagging tied to CI/CD. This is where real operational maturity starts.

Stage 3: B2B, revenue-sensitive, multiple critical integrations

Add APM depth, database performance breakdowns, external service timing, and synthetic checks for key flows like signup and checkout. Still do not build a dashboard museum.

How Omaship fits this

Omaship already leans into the stack that makes monitoring sane: Rails 8, server-rendered pages, Solid Queue, and a predictable CI/CD pipeline. That means you can add Sentry or AppSignal in minutes, attach release SHAs to deploys, and instrument the handful of flows that actually matter instead of deciphering a spaghetti stack you half vibe-coded into existence.

More importantly, the codebase stays legible for both humans and agents. When your monitoring tool points to a failure, you can actually fix it without spelunking through three abstraction pyramids and a "helpers" folder from hell.

The simple rule

Monitoring is not there to impress other developers. It is there to shorten the time between "something is wrong" and "it is fixed." Install one error tracker. Add one real health endpoint. Tag releases. Capture user context. Monitor queue lag. Then go build the product.

Ship with monitoring that catches problems before customers do.

Omaship gives you a Rails 8 foundation that stays debuggable in production, with clean CI/CD, Solid Queue, and conventions that make error tracking and health checks straightforward instead of bolted on.

Start building
