This artifact provides an example of deploying an Ollama slice on FABRIC for running large language models (LLMs). It includes setup instructions, Jupyter notebooks, and configuration files to launch an Ollama node, deploy LLM services, and run queries.
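Once the Ollama service is running on the slice, queries can be issued against its HTTP API. The sketch below, a minimal illustration using only the Python standard library, builds and sends a request to Ollama's `/api/generate` endpoint; the host address and model name are placeholders to be replaced with the values from your own deployment.

```python
import json
import urllib.request

# Hypothetical address: replace with your FABRIC node's IP and Ollama port.
OLLAMA_URL = "http://localhost:11434"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,       # e.g. a model pulled on the node, such as "llama3"
        "prompt": prompt,
        "stream": False,      # return one complete JSON response instead of a stream
    }).encode("utf-8")
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def query(model: str, prompt: str) -> str:
    """Send a prompt to the Ollama node and return the generated text."""
    with urllib.request.urlopen(build_generate_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]
```

A Jupyter cell could then call `query("llama3", "Why is the sky blue?")` to exercise the deployed service, assuming the corresponding model has been pulled on the node.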