Multi-site experiments with bandwidth-reserved L2PTP tunnels and a DPDK traffic shaper. Evaluates TCP CUBIC, BBRv1, and BBRv3 for large-scale data transfers, focusing on flow completion time (FCT) efficiency, predictability, and bandwidth overhead.
Deploy a self-hosted vLLM + LiteLLM + Nginx stack on a GPU-equipped FABRIC node with automatic GPU detection, model selection, and OpenAI-compatible API access via public IP or SSH tunnel.
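Once the stack is up, it can be queried like any OpenAI-compatible endpoint. The sketch below builds such a request with only the standard library; the IP, port, API key, and model name are placeholder assumptions, not values from the artifact.

```python
# Hypothetical sketch: building a request for the stack's OpenAI-compatible
# /v1/chat/completions endpoint. Host, port, key, and model are placeholders.
import json
import urllib.request

BASE_URL = "http://203.0.113.10:4000"   # placeholder public IP (or localhost via SSH tunnel)
API_KEY = "sk-placeholder"              # whatever key LiteLLM was configured with

def chat_request(prompt, model="served-model"):
    """Construct (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )

req = chat_request("Hello from FABRIC!")
print(req.full_url)  # http://203.0.113.10:4000/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` (or pointing the official `openai` client at `BASE_URL`) would return a standard chat-completion JSON response from the vLLM backend.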
Jupyter notebooks to flash and initialize FPGAs for ESnet or NEU workflows. Run one of these notebooks first to ensure your FPGA is correctly configured before running custom code.
This notebook accompanies the Mastering FABRIC: Tips and Tricks webinar titled GPU Nodes on Federated Testbeds Workflow. It demonstrates how to reserve and use GPUs at Chameleon, with data transfer through a DTN.
This artifact provides an example of deploying an Ollama slice on FABRIC for running large language models (LLMs). It includes setup instructions, Jupyter notebooks, and configuration files to launch an Ollama node, deploy LLM services, and run queries.
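Queries against the Ollama node go through Ollama's native HTTP API, which listens on port 11434 by default. This is a minimal sketch assuming access over an SSH tunnel; the model name is a placeholder, not necessarily one the artifact deploys.

```python
# Hypothetical sketch: building a request for Ollama's /api/generate endpoint
# on the slice's Ollama node. Host and model name are placeholder assumptions.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"   # Ollama's default port, reached over an SSH tunnel

def generate_request(prompt, model="llama3"):
    """Construct (but do not send) a non-streaming Ollama generate request."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = generate_request("Why is the sky blue?")
print(req.full_url)  # http://localhost:11434/api/generate
```

Sending the request with `urllib.request.urlopen(req)` would return a JSON object whose `response` field holds the model's completion.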