Getting Started

Integration

Overview

TaskForge is designed to work alongside the tools your team already uses. Through a combination of native connectors, webhooks, and the TaskForge REST API, you can integrate TaskForge with virtually any service in your stack — from data warehouses and message queues to third-party SaaS platforms and internal microservices.

Native connectors

TaskForge ships with a library of pre-built connectors for the most commonly used platforms. These connectors handle authentication, data mapping, and error handling out of the box, so you can be up and running in minutes rather than days.

Currently available native connectors include:

  • Data sources: PostgreSQL, MySQL, MongoDB, BigQuery, Snowflake, Redshift
  • Message queues: Apache Kafka, RabbitMQ, AWS SQS, Google Pub/Sub
  • Storage: AWS S3, Google Cloud Storage, Azure Blob Storage
  • Productivity: Slack, Microsoft Teams, Notion, Linear, Jira
  • Monitoring: Datadog, Grafana, PagerDuty, Sentry

To enable a connector, navigate to Settings → Integrations, select the service, and follow the connection wizard.

Webhook integration

Webhooks allow external services to push real-time event data into your TaskForge pipelines. To create a webhook endpoint:

  1. Go to Settings → Integrations → Webhooks.
  2. Click Create Endpoint and give it a descriptive name.
  3. Copy the generated endpoint URL and paste it into the sending service's webhook configuration.
  4. Select the event types you want to receive.
  5. Optionally configure a signing secret to verify incoming requests.

Incoming webhook payloads are available as variables within your pipeline tasks and can be used to trigger conditional logic, route data, or populate task parameters dynamically.

REST API integration

The TaskForge REST API allows you to programmatically manage resources, trigger pipelines, and retrieve results from any external system.

All API endpoints are documented in the API Reference section.

A basic pipeline trigger via the API looks like this:

POST /v2/pipelines/{pipeline_id}/trigger
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json

{
  "input": {
    "source": "external_trigger",
    "payload": {}
  }
}
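The same request can be issued from code. The sketch below builds the request shown above and sends it with the Fetch API (built into Node.js 18+); the base URL and pipeline ID are placeholders you would substitute with your own values:

```javascript
// Build the trigger request shown above. Separated from the network
// call so the request shape is easy to inspect and test.
function buildTriggerRequest(pipelineId, apiKey, payload) {
  return {
    url: `/v2/pipelines/${pipelineId}/trigger`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ input: { source: "external_trigger", payload } }),
    },
  };
}

// Send the trigger to your TaskForge instance (base URL is a placeholder).
async function triggerPipeline(baseUrl, pipelineId, apiKey, payload) {
  const req = buildTriggerRequest(pipelineId, apiKey, payload);
  const res = await fetch(baseUrl + req.url, req.options);
  if (!res.ok) throw new Error(`Trigger failed with status ${res.status}`);
  return res.json();
}
```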

Custom connector development

If a native connector does not exist for your service, you can build a custom connector using the TaskForge Connector SDK. Custom connectors are defined as JavaScript or TypeScript modules that implement a standard interface for reading, writing, and streaming data.

To scaffold a new connector:

taskforge connector create my-connector

This generates a boilerplate connector project with authentication helpers, input/output schema definitions, and test utilities. Once built, custom connectors can be published to your private workspace registry and reused across multiple pipelines.
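To give a feel for the shape of a connector module, here is a hypothetical sketch. The method names (`connect`, `read`, `write`) and descriptor fields are illustrative assumptions, not the SDK's actual interface — the generated boilerplate defines the real contract:

```javascript
// Hypothetical connector module shape. Every name below is an
// illustrative assumption; the actual interface comes from the
// Connector SDK boilerplate generated by `taskforge connector create`.
const myConnector = {
  name: "my-connector",

  // Establish a session using credentials collected at setup time.
  async connect(config) {
    return { baseUrl: config.baseUrl, token: config.token };
  },

  // Read records from the external service.
  async read(conn, params) {
    // A real connector would call the service here; this stub
    // returns a single placeholder record.
    return [{ id: 1, source: conn.baseUrl, query: params.query }];
  },

  // Write records back to the external service.
  async write(conn, records) {
    return { written: records.length };
  },
};

module.exports = myConnector;
```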

Data transformation

When integrating systems with different data shapes, TaskForge provides a built-in transformation layer. You can define transformation rules using JSONata expressions or custom JavaScript functions that run inline within your pipeline steps.

Transformations support field renaming, type casting, conditional mapping, array flattening, and nested object restructuring — covering the majority of integration data mapping needs without external tooling.
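As a sketch of the custom-JavaScript route, the function below applies several of the operations listed above to a single record; the input field names are invented for illustration:

```javascript
// Inline transformation sketch covering field renaming, type casting,
// array flattening, and nested restructuring. The record shape
// (user_id, amount, tagGroups, address) is invented for illustration.
function transform(record) {
  return {
    userId: record.user_id,                       // field renaming
    amount: Number(record.amount),                // type casting (string → number)
    tags: record.tagGroups.flat(),                // array flattening
    city: record.address && record.address.city,  // nested object restructuring
  };
}
```

The equivalent mapping can also be expressed declaratively as a JSONata expression if you prefer to keep transformation rules out of code.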
