
Plugins Panel

Overview

The Plugins panel provides access to n8n, a powerful workflow automation platform integrated into Shakudo. n8n enables users to create automated workflows that connect APIs, services, databases, and other Shakudo stack components through a visual interface. This panel loads the n8n web interface in an embedded view, allowing users to design, execute, and monitor automation workflows without leaving the Shakudo dashboard.

Access & Location

  • Route: ?panel=plugins
  • Navigation: Header (Quick Access) → Plugins
  • Access Requirements:
    • Must have pluginsPanelEnabled feature flag enabled at the platform level
    • No specific RBAC requirements - access follows n8n's own authentication
  • Feature Flags: pluginsPanelEnabled (environment variable: HYPERPLANE__DASHBOARD_PLUGINS_PANEL_ENABLED)

Key Capabilities

Workflow Automation Hub

Access n8n as a central automation hub for orchestrating processes across the Shakudo platform. Design visual workflows that connect multiple services, trigger actions based on events, and automate data pipelines without writing code.

Visual Workflow Designer

Use n8n's drag-and-drop interface to build automation workflows with nodes representing different services, actions, and logic. Connect nodes to create complex automation scenarios that span multiple Shakudo components.

Stack Component Integration

Connect n8n workflows to other Shakudo stack components using internal Kubernetes service URLs. Integrate with databases (Supabase, ClickHouse), AI services (Ollama, LiteLLM), messaging systems (Kafka), observability tools (Langfuse), and more.

Event-Driven Automation

Trigger workflows based on webhooks, scheduled events, Kafka messages, or data changes in connected services. Create responsive automation that reacts to events across your data infrastructure.

Shakudo Pipeline Orchestration

Trigger and monitor Shakudo immediate and scheduled jobs from n8n workflows. Build complex data processing pipelines that combine Shakudo's compute capabilities with n8n's automation logic.

AI Workflow Automation

Query LLM models hosted on Ollama or routed through LiteLLM directly from n8n workflows. Build AI-powered automation for content generation, text summarization, data analysis, and intelligent decision-making.

Data Processing and ETL

Fetch, transform, and load data between different systems. Create ETL workflows that move data between Supabase, ClickHouse, MinIO, and external APIs, with built-in error handling and retry logic.

Monitoring and Observability

Integrate with Langfuse to log and trace workflow executions, API calls, and LLM responses. Track workflow performance and debug issues with detailed execution history.

User Interface

Main View

The Plugins panel displays a full-screen iframe that loads the n8n web interface from the plugins subdomain (e.g., https://plugins.{domain}). The iframe adapts to the dashboard's navigation state:

  • Desktop View: Left padding adjusts based on whether the main navigation drawer is locked (expanded) or collapsed
  • Mobile View: No left padding, full-width display for optimal mobile experience
  • Responsive Layout: Automatically adjusts to viewport size changes

The n8n interface within the iframe provides:

  • Canvas: Visual workflow editor with drag-and-drop nodes
  • Node Panel: Library of available integrations and actions
  • Execution History: View past workflow runs and debug failures
  • Credentials Manager: Securely store API keys and connection details
  • Settings: Configure workflow behavior, error handling, and scheduling

Dialogs & Modals

The Plugins panel itself does not implement any dialogs. All interactions occur within the embedded n8n interface, which has its own modal dialogs for:

  • Node configuration
  • Credential management
  • Workflow settings
  • Execution details
  • Error messages

Tables & Data Grids

No tables are implemented in the panel component. All data visualization occurs within the n8n interface.

Technical Details

GraphQL Operations

Queries: None - the panel embeds n8n in an iframe and makes no direct GraphQL calls

Mutations: None

Subscriptions: None

Component Structure

  • Main Component: /root/gitrepos/monorepo/apps/hyperplane-dashboard/components/Plugins/PluginPanel.tsx
  • Export Name: N8nPanel (exported as default)

Implementation Details

The panel is implemented as a simple iframe wrapper that:

  1. Constructs the n8n URL from platform parameters: ${protocol}://plugins.${domain}
  2. Applies responsive padding based on navigation drawer state (via DrawerLockedAtom)
  3. Detects mobile viewport using Material-UI's responsive breakpoints
  4. Renders a borderless iframe with 100% height and width

State Management

  • Jotai Atom: DrawerLockedAtom - Tracks whether the main navigation drawer is locked/expanded
  • Context: PlatformParametersContext - Provides domain and protocol for URL construction
  • Responsive State: Material-UI's useMediaQuery for mobile detection

URL Pattern

The n8n instance is hosted on a dedicated subdomain following the pattern:

  • External URL: https://plugins.{domain} (e.g., https://plugins.dev.hyperplane.dev)
  • Protocol: Inherits from platform configuration (typically HTTPS)

Feature Flag Configuration

The panel is only accessible when the platform administrator enables it via environment variable:

HYPERPLANE__DASHBOARD_PLUGINS_PANEL_ENABLED=true

Common Workflows

Automating AI-Powered Reporting

  1. Create a new workflow in n8n
  2. Add a webhook trigger or schedule trigger node
  3. Add an HTTP Request node to query Ollama for AI-generated summaries:
    • URL: http://ollama.hyperplane-ollama.svc.cluster.local:11434/api/generate
    • Method: POST
    • Include your prompt and model parameters
  4. Add a Supabase node to store the AI-generated results
  5. Add a notification node (Slack, email) to send the report
  6. Activate the workflow and test execution

Event-Driven Data Pipeline

  1. Create a workflow with a Kafka trigger node
  2. Configure the Kafka node with your topic and connection details
  3. Add processing nodes to transform the incoming event data
  4. Add an HTTP Request node to trigger a Shakudo immediate job:
    • Use the Shakudo API to create and monitor the job
    • Pass event data as job parameters
  5. Add a Supabase node to log the pipeline execution
  6. Add a Langfuse node to track observability metrics
  7. Activate the workflow to process events in real-time

Scheduled Data Ingestion from External API

  1. Create a workflow with a Schedule trigger (e.g., daily at midnight)
  2. Add an HTTP Request node to fetch data from an external API
  3. Add data transformation nodes (Set, Function) to clean and format the data
  4. Add a database node (Supabase, ClickHouse) to insert the processed data
  5. Add error handling with conditional logic to retry failures
  6. Add a notification node to alert on success or failure
  7. Activate the workflow and monitor executions
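The transform step (3) typically reshapes raw API records before they are inserted. A minimal Python sketch of the kind of cleaning a Set or Function node performs (the field names "label" and "value" are hypothetical - map them to whatever your external API actually returns):

```python
from datetime import datetime, timezone

def clean_record(raw: dict) -> dict:
    """Normalize one raw API record before loading it into the database.

    The field names ("label", "value") are hypothetical examples; adapt
    them to the external API's actual response shape.
    """
    return {
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "label": (raw.get("label") or "").strip().lower(),
        "value": float(raw["value"]),
    }

records = [{"label": "  Temp ", "value": "21.5"}]
cleaned = [clean_record(r) for r in records]
print(cleaned[0]["label"], cleaned[0]["value"])
```

In n8n itself this logic would live in a Code/Function node (JavaScript); the sketch only illustrates the shape of the transformation.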

Connecting Multiple Stack Components

  1. Map out your workflow across Supabase, Ollama, LiteLLM, and Appsmith
  2. Create a workflow that:
    • Fetches data from Supabase (PostgreSQL node)
    • Processes data with AI using Ollama or LiteLLM (HTTP Request nodes)
    • Stores enriched data back to Supabase
    • Updates an Appsmith dashboard via API call
  3. Configure internal Kubernetes service URLs for all components:
    • Supabase: postgresql://<user>:<pass>@supabase.hyperplane-supabase:5432/<db>
    • Ollama: http://ollama.hyperplane-ollama.svc.cluster.local:11434
    • LiteLLM: http://litellm.hyperplane-litellm.svc.cluster.local:4000
    • Langfuse: http://langfuse.hyperplane-langfuse.svc.cluster.local:3000
  4. Test each connection individually before running the full workflow
  5. Monitor execution logs in n8n and observability in Langfuse

Triggering and Monitoring Shakudo Jobs

  1. Create a workflow with an appropriate trigger (webhook, schedule, event)
  2. Add an HTTP Request node to create a Shakudo immediate job:
    • Use the Shakudo GraphQL API or REST endpoint
    • Include job parameters (script, environment, resources)
  3. Add a Wait node or polling logic to monitor job completion
  4. Add conditional logic to handle job success or failure
  5. Add follow-up actions based on job results (notifications, data processing)
  6. Store job metadata in Supabase for audit trail
  7. Activate and test the workflow with a test job
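The polling logic in step 3 can be sketched generically. The `check_status` callable below is a placeholder for whatever Shakudo API call reports job status (not a real endpoint - consult the Shakudo API documentation for the actual call):

```python
import time

def wait_for_job(check_status, timeout_s: float = 600.0, interval_s: float = 1.0) -> str:
    """Poll check_status() until it returns a terminal state or the timeout expires.

    check_status is a placeholder callable wrapping your actual Shakudo API
    call; it should return one of "running", "success", or "failure".
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = check_status()
        if state in ("success", "failure"):
            return state
        time.sleep(interval_s)
    raise TimeoutError("job did not finish within the timeout")

# Simulated job that succeeds on the third poll.
states = iter(["running", "running", "success"])
print(wait_for_job(lambda: next(states), interval_s=0.01))  # success
```

An n8n Wait node plus an IF node achieves the same effect declaratively; the sketch just makes the retry/timeout semantics explicit.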

Related Features

  • Stack Components - Install and manage n8n and other automation tools
  • Jobs - Shakudo pipeline jobs that can be triggered from n8n workflows
  • Scheduled Jobs - Recurring jobs that can complement n8n automation
  • Services - Long-running services that n8n can interact with
  • Secrets - Store API credentials used by n8n workflows securely

Notes & Tips

Best Practices

  • Use Internal URLs: Always connect to stack components using Kubernetes internal service URLs (e.g., http://service.namespace.svc.cluster.local:port) for better performance and security
  • Store Credentials Securely: Use n8n's credential manager to store API keys and database passwords instead of hardcoding them in workflows
  • Error Handling: Add error handling nodes to workflows to gracefully handle failures and retry transient errors
  • Test Incrementally: Build workflows step-by-step, testing each node individually before connecting them together
  • Monitor Execution History: Regularly review workflow execution logs to identify bottlenecks and failures
  • Use Langfuse: Integrate Langfuse for observability when building AI-powered workflows to track LLM usage and performance
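The internal-URL convention above is mechanical; a small sketch of how such URLs are assembled (the service names and ports mirror the examples on this page - verify them against your own cluster before use):

```python
def internal_url(service: str, namespace: str, port: int, scheme: str = "http") -> str:
    """Build a Kubernetes in-cluster service URL of the form used throughout this page."""
    return f"{scheme}://{service}.{namespace}.svc.cluster.local:{port}"

# These name/port pairs mirror the examples on this page; confirm them
# against your deployment before wiring them into a workflow.
print(internal_url("ollama", "hyperplane-ollama", 11434))
print(internal_url("litellm", "hyperplane-litellm", 4000))
```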

Integration Examples

Connecting to Supabase:

Host: supabase.hyperplane-supabase
Port: 5432
Database: postgres (or your database name)
User: postgres (or your username)
Password: (use n8n credentials manager)
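Outside n8n's Postgres node (for example in a code step), the same connection details form a standard libpq-style DSN. A sketch, with placeholder credentials - pull the real ones from n8n's credential manager:

```python
from urllib.parse import quote

def postgres_dsn(user: str, password: str, host: str, port: int, database: str) -> str:
    """Assemble a libpq-style connection URL; the password is URL-encoded
    so special characters survive."""
    return f"postgresql://{quote(user)}:{quote(password, safe='')}@{host}:{port}/{database}"

# Host and database mirror the example above; user/password are placeholders.
print(postgres_dsn("postgres", "example-password", "supabase.hyperplane-supabase", 5432, "postgres"))
```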

Querying Ollama:

URL: http://ollama.hyperplane-ollama.svc.cluster.local:11434/api/generate
Method: POST
Body: {
  "model": "llama3.2",
  "prompt": "Your prompt here",
  "stream": false
}
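The same call as a Python sketch using only the standard library (the model name is an example - use whichever model you have actually pulled into Ollama, and note the request only resolves from inside the cluster):

```python
import json
import urllib.request

OLLAMA_URL = "http://ollama.hyperplane-ollama.svc.cluster.local:11434/api/generate"

def build_generate_request(prompt: str, model: str = "llama3.2") -> urllib.request.Request:
    """Build the POST request shown above; call urllib.request.urlopen() on it
    from inside the cluster to execute it."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}, method="POST"
    )

req = build_generate_request("Summarize yesterday's sales data.")
print(req.get_method(), json.loads(req.data)["model"])
```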

Using LiteLLM Gateway:

URL: http://litellm.hyperplane-litellm.svc.cluster.local:4000/chat/completions
Method: POST
Headers: Authorization: Bearer <your-api-key>
Body: OpenAI-compatible format
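"OpenAI-compatible format" means the standard chat-completions request body. A sketch of building it (the model alias is a placeholder - use a name your LiteLLM config actually routes, and supply the API key via the Authorization header as shown above):

```python
import json

def chat_payload(user_message: str, model: str = "gpt-4o") -> str:
    """Build an OpenAI-style /chat/completions body. "gpt-4o" is a placeholder
    alias; substitute a model your LiteLLM deployment is configured to route."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })

print(chat_payload("Classify this ticket as bug or feature request."))
```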

Common Use Cases

  1. Automated AI Workflows

    • Process user inputs and query Ollama for content generation
    • Use LiteLLM to route requests across multiple LLM providers
    • Store AI-generated results in Supabase for application use
  2. Data Ingestion and ETL

    • Fetch data from external APIs on a schedule
    • Transform and clean data using n8n's built-in functions
    • Load data into Supabase, ClickHouse, or MinIO
  3. Pipeline Orchestration

    • Trigger Shakudo immediate jobs when new data arrives
    • Monitor job completion and process results
    • Chain multiple jobs together with conditional logic
  4. Event-Driven Processing

    • Subscribe to Kafka topics and process messages
    • React to database changes via webhooks
    • Trigger workflows based on external system events
  5. Observability and Monitoring

    • Log all workflow executions to Langfuse
    • Track LLM usage and costs
    • Send alerts to Slack or email on failures

Performance Considerations

  • Workflow Complexity: Complex workflows with many nodes may have longer execution times - consider breaking them into smaller, chained workflows
  • External API Calls: Network latency to external APIs can slow down workflows - use appropriate timeout settings
  • Polling vs Webhooks: Prefer webhook triggers over polling when possible to reduce resource usage
  • Concurrent Executions: n8n supports concurrent workflow executions, but be mindful of rate limits on connected services
  • Data Volume: For large data processing tasks, consider triggering Shakudo jobs instead of processing directly in n8n

Troubleshooting Common Issues

  • Connection Timeouts: Verify internal service URLs are correct and services are running in their namespaces
  • Authentication Failures: Check that credentials are properly configured in n8n's credential manager
  • Workflow Stuck: Check the execution log for the specific node that's blocking - you may need to adjust its timeout settings
  • Missing Data: Verify that the previous node's output format matches the expected input format of the next node
  • Service Not Accessible: Ensure the stack component is installed, active, and has the correct service name in its namespace

Important Limitations

  • Iframe Limitations: The n8n interface runs in an iframe, which may have some browser restrictions on cookies, storage, or popups depending on browser security settings
  • Authentication: n8n authentication is separate from Shakudo dashboard authentication - users need n8n credentials to access workflows
  • Feature Flag Required: The Plugins panel must be explicitly enabled by platform administrators via environment variable
  • No Direct GraphQL: The panel does not interact with Shakudo's GraphQL API directly - all n8n operations happen within the iframe
  • Subdomain Dependency: Requires a properly configured plugins subdomain pointing to the n8n instance

Installation and Setup

n8n must be installed as a Shakudo stack component before the Plugins panel becomes useful. The installation process:

  1. Install n8n via Stack Components panel or Helm chart
  2. Configure the n8n namespace (typically hyperplane-n8n)
  3. Set up Keycloak redirect URLs for SSO integration (optional)
  4. Run a GraphQL mutation to register n8n as a platform app
  5. Enable the pluginsPanelEnabled feature flag
  6. Access n8n through the Plugins panel

For detailed installation instructions, refer to the n8n stack component documentation in /stack-components/n8n/.

Workflow Development Tips

  • Start Simple: Begin with basic workflows and gradually add complexity
  • Use the Manual Trigger: Test workflows manually before activating production triggers
  • Version Control: Export workflows as JSON files and store them in git for versioning
  • Documentation: Add notes to workflow nodes to document logic and integration details
  • Naming Conventions: Use clear, descriptive names for workflows and nodes
  • Resource Monitoring: Monitor workflow execution times and resource usage to optimize performance