Analytics & Insights

Overview

The TaskForge Analytics & Insights module gives you full visibility into how your pipelines, models, and tasks are performing over time. From high-level operational dashboards to granular per-request telemetry, you have the data you need to optimize performance, debug issues, and make informed infrastructure decisions.

The analytics dashboard

The main analytics dashboard is accessible from the left sidebar under Analytics. It provides a real-time overview of your workspace activity, including:

  • Total tasks processed (hourly, daily, weekly)
  • Pipeline success and failure rates
  • Average task latency and p95/p99 percentiles
  • Model inference throughput and error rate
  • Active pipeline count and queue depth

All metrics can be filtered by date range, pipeline, model, or team. The dashboard refreshes automatically every 30 seconds.
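The p95 and p99 latency figures on the dashboard are ordinary percentiles over recent latency samples. As a rough illustration of how such values are derived (plain Python over made-up samples, not TaskForge code):

```python
def percentile(samples, pct):
    """Return the pct-th percentile of samples using the nearest-rank method."""
    ordered = sorted(samples)
    # Nearest-rank: take the ceil(pct/100 * n)-th smallest value (1-indexed).
    rank = max(1, -(-pct * len(ordered) // 100))  # ceiling division
    return ordered[rank - 1]

# Illustrative per-task latencies in milliseconds.
latencies_ms = [12, 15, 14, 90, 13, 16, 250, 14, 15, 13]
p95 = percentile(latencies_ms, 95)  # dominated by the slowest outliers
p99 = percentile(latencies_ms, 99)
```

High percentiles like these surface tail latency that the mean hides, which is why the dashboard reports them alongside the average.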

Pipeline performance metrics

For each pipeline, TaskForge tracks a detailed set of execution metrics. To view pipeline-level analytics:

  1. Navigate to Pipelines in the sidebar.
  2. Select a pipeline and click the Analytics tab.
  3. Use the date range selector to focus on a specific time window.

Key metrics available per pipeline include execution duration, step-level breakdown, retry counts, error frequency by step, and data throughput in bytes per second. You can export any metric view as a CSV or PNG chart for reporting purposes.
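The per-pipeline metrics above are simple aggregates over individual execution records. The sketch below shows the arithmetic on hypothetical records; the field names are illustrative only and are not the TaskForge export format:

```python
from collections import Counter

# Hypothetical step-level execution records for one pipeline run window.
runs = [
    {"step": "extract", "status": "success", "retries": 0, "bytes": 10_240, "seconds": 2.0},
    {"step": "transform", "status": "success", "retries": 1, "bytes": 8_192, "seconds": 4.0},
    {"step": "load", "status": "failed", "retries": 3, "bytes": 0, "seconds": 1.0},
]

# Success rate: fraction of records that completed successfully.
success_rate = sum(r["status"] == "success" for r in runs) / len(runs)

# Error frequency by step: count failures grouped by step name.
errors_by_step = Counter(r["step"] for r in runs if r["status"] == "failed")

# Data throughput: total bytes moved divided by total execution time.
throughput_bps = sum(r["bytes"] for r in runs) / sum(r["seconds"] for r in runs)
```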

Model performance tracking

For AI-powered pipelines, the Models analytics panel provides inference-specific metrics including:

  • Latency: Mean, p50, p95, and p99 inference time in milliseconds
  • Accuracy drift: Changes in output confidence scores over time
  • Token usage: Total tokens consumed per model per day
  • Error breakdown: Timeout, invalid input, and model-side errors categorized by type

Accuracy drift alerts can be configured to notify your team via Slack or email when output confidence drops below a defined threshold.
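Conceptually, a drift alert of this kind compares a rolling average of recent confidence scores against the configured threshold. A minimal sketch of that logic, assuming a fixed window size (this is plain Python, not the TaskForge alerting engine):

```python
from collections import deque

def drift_alert(scores, window=5, threshold=0.8):
    """Yield True for each score once the rolling mean falls below threshold."""
    recent = deque(maxlen=window)
    for score in scores:
        recent.append(score)
        # Only alert once a full window of history is available.
        yield len(recent) == window and sum(recent) / window < threshold

# Confidence scores drifting downward over successive inferences.
confidences = [0.92, 0.90, 0.88, 0.75, 0.70, 0.65, 0.60]
alerts = list(drift_alert(confidences))  # alerts fire for the last two scores
```

Averaging over a window rather than alerting on single low scores avoids paging the team for one noisy inference.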

Custom reports

TaskForge allows you to build custom reports by combining any available metric with filters, groupings, and visualization types. Reports can be:

  1. Created from the Analytics → Reports section.
  2. Scheduled for automatic email delivery (daily, weekly, or monthly).
  3. Shared with specific team members or made public within the workspace.
  4. Exported as PDF or CSV for external use.

Custom reports support bar charts, line graphs, scatter plots, and summary tables.

Log explorer

The Log Explorer provides full-text search across all task execution logs within your workspace. You can filter logs by pipeline, task ID, status code, time range, and custom log levels.

Structured logs emitted by your tasks using taskforge.log() are automatically indexed and searchable within seconds of execution. This makes the Log Explorer the fastest way to debug a failing task or trace a specific data record through your pipeline.
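The reason structured logs index so well is that each entry is a set of key/value fields rather than free text. The idea can be sketched with the standard library alone; the function name and fields below are illustrative and this is not the taskforge.log() implementation:

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_structured(event, **fields):
    """Emit one JSON object per line so every field is individually searchable."""
    line = json.dumps({"event": event, **fields})
    logging.info(line)
    return line

# Each field (task_id, pipeline, status) becomes a filterable facet in a log index.
log_structured("task_completed", task_id="t-42", pipeline="daily-etl", status="success")
```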

Alerts & anomaly detection

TaskForge includes a built-in alerting engine that monitors your pipelines for anomalies and notifies you when thresholds are breached. To configure an alert:

  1. Go to Analytics → Alerts.
  2. Click Create Alert and select the metric to monitor.
  3. Define a threshold condition (e.g., error rate greater than 5%).
  4. Choose notification channels (email, Slack, webhook).
  5. Save and activate the alert.

Alerts support both static thresholds and dynamic anomaly detection, which learns from historical patterns to flag unusual behavior automatically.
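A common way to implement dynamic anomaly detection of this kind is a z-score test: flag the current value when it sits too many standard deviations from the historical mean. A minimal sketch, assuming a simple fixed history window (plain Python, not the TaskForge engine):

```python
from statistics import mean, stdev

def is_anomalous(history, current, z_threshold=3.0):
    """Flag current if it deviates from the historical mean by > z_threshold stddevs."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # Perfectly flat history: any change at all is unusual.
        return current != mu
    return abs(current - mu) / sigma > z_threshold

# Historical hourly error rates hovering around 1%.
error_rates = [0.010, 0.012, 0.009, 0.011, 0.010, 0.013, 0.008]
```

Unlike a static "error rate > 5%" rule, this adapts to each pipeline's own baseline: a jump to 6% is anomalous for a pipeline that normally errors at 1%, but might be routine for a noisier one.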
