FABRIC AI Automation Stack: n8n, Ollama, & Open WebUI
This FABRIC artifact provides a complete, self-contained environment for building AI-driven workflows. By provisioning a VM with dedicated GPU access and both public and internal FABRIC networking, it offers a platform for experimenting with workflow automation and locally hosted Large Language Models (LLMs).
Core Service Stack
- n8n (root /): The primary workflow automation tool, served at the root path of the public IP. It connects to the local Ollama instance for AI tasks.
- Ollama: The engine serving LLMs (such as Llama 3) directly on the VM's GPU, providing low-latency inference for both n8n and Open WebUI (see the API sketch after this list).
- Open WebUI: A modern, web-based chat interface for interacting directly with the Ollama-served LLMs.
- Postgres Database: Provides persistent, scalable data storage for the n8n application.
- Nginx Reverse Proxy: Manages TLS termination (HTTPS on port 443) and routes traffic securely to all services.
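For reference, Ollama exposes a small HTTP API on the VM (port 11434 by default). The sketch below shows a minimal Python call to its /api/generate endpoint; the model name "llama3" is an assumption for illustration and depends on which models have actually been pulled on the VM.

```python
# Minimal sketch: query the local Ollama API from inside the VM.
# Assumes Ollama listens on its default port (11434) and that a model
# named "llama3" has already been pulled (e.g., `ollama pull llama3`).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

payload = {
    "model": "llama3",   # assumed model name; substitute whatever is installed
    "prompt": "Summarize what n8n is in one sentence.",
    "stream": False,     # ask for a single JSON response instead of a stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])  # the generated completion text
```

This is the same endpoint an n8n HTTP Request node or the Open WebUI backend would call internally.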
FABRIC Networking Configuration
- FabNetv4Ext (Public Access): Connects the slice to the external network, providing the public IP address necessary to access the Nginx proxy via port 443.
- FabNetv4 (Internal Connectivity): Gives the VM direct access to the standard FABRIC Measurement and Control Plane (MCP) network, enabling connectivity to future FABRIC resources deployed on this subnet.
- Access URL: All services are reached through the single Nginx endpoint https://<VM_PUBLIC_IP>:443/ (see the fablib networking sketch below).
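As an illustration only (the slice, node, and NIC names below are hypothetical and not taken from this artifact), attaching both networks with fablib typically looks roughly like this:

```python
# Sketch: attach FabNetv4Ext (public) and FabNetv4 (internal) to one VM.
# All names ("ai-stack", "node1", ...) are illustrative placeholders.
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.new_slice(name="ai-stack")
node = slice.add_node(name="node1", site="RENC", cores=8, ram=32, disk=100)

# One NIC per network service.
iface_ext = node.add_component(model="NIC_Basic", name="nic_ext").get_interfaces()[0]
iface_int = node.add_component(model="NIC_Basic", name="nic_int").get_interfaces()[0]

# FabNetv4Ext: externally routable IPv4, used to reach the Nginx proxy.
slice.add_l3network(name="net_ext", interfaces=[iface_ext], type="IPv4Ext")

# FabNetv4: FABRIC's internal IPv4 network for intra-testbed connectivity.
slice.add_l3network(name="net_int", interfaces=[iface_int], type="IPv4")

slice.submit()
```

Note that with FabNetv4Ext the assigned address generally still has to be made publicly routable after submission (fablib's make_ip_publicly_routable() call on the network) before port 443 is reachable from the Internet.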
GPU Acceleration
The VM is provisioned with a dedicated GPU to meet the computational demands of the Ollama service, ensuring fast model loading and low-latency inference and keeping the automation stack responsive for AI tasks.
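Continuing the provisioning sketch above, the GPU is attached as a standard FABRIC component. The model name "GPU_RTX6000" is an assumption for illustration; FABRIC sites offer several GPU models, and this artifact may request a different one.

```python
# Sketch: attach a GPU component and confirm the driver can see it.
# "GPU_RTX6000" is an assumed model; pick one available at your site.
node.add_component(model="GPU_RTX6000", name="gpu1")
slice.submit()

# After the VM boots (and the NVIDIA driver is installed), verify the GPU.
stdout, stderr = node.execute("nvidia-smi")
print(stdout)
```

Once the NVIDIA driver and runtime are in place, Ollama detects and uses the GPU automatically.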
Versions

| Version | Date | URN |
| --- | --- | --- |
| 2025-10-30.1 | Oct. 30, 2025, 9:07 p.m. | urn:fabric:contents:renci:20f539e7-1eab-4ab1-bc7c-f2fdf74ea071 |
Authors
- Komal Thareja, University of North Carolina at Chapel Hill (kthare10@email.unc.edu)