This repository contains a Scala library written for the conference talk "Prompting Safely: Building Secure LLM Prompts with a Scala DSL", to be presented at the Lambda World 2025 conference.
Developed by Ignacio Gallego Sagastume and Isaias Bartelborth for Hivemind Technologies.
Hivemind Technologies AG is a small German software engineering consultancy providing AI and Big Data processing solutions for clients across Germany, the UK, Spain and other countries. We focus on Functional Programming and cutting-edge technologies.
If you would like to contribute or give us feedback on this library, please use our contact form or send an email to info@hivemindtechnologies.com.
Large Language Models (LLMs) are advanced artificial intelligence systems trained on vast amounts of text data. They can understand, generate, and manipulate human language in ways that were previously impossible. Popular examples include:
- GPT-X (OpenAI) - General-purpose language model
- Claude (Anthropic) - Safety-focused conversational AI
- Gemini (Google) - Multimodal AI model
- Llama (Meta) - Open-source language model
LLMs work by predicting the next word in a sequence based on the context provided, enabling them to:
- Answer questions and solve problems
- Generate creative content (stories, code, poetry)
- Translate between languages
- Analyze and summarize text
- Assist with programming and technical tasks
A prompt is the input text that guides an LLM's behavior and determines its output. Think of it as the "instructions" you give to the AI. Prompts can be:
- Simple: "What is the capital of France?"
- Complex: Multi-step instructions with examples, constraints, and specific output formats
- Structured: Organized into sections like role definition, task description, and examples
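For illustration, a structured prompt might look like the sketch below (the wording and sections are invented for this example):

```scala
// A structured prompt as a plain multi-line string: role, task,
// constraints and a worked example are kept in clearly separated sections.
val structuredPrompt: String =
  """Role: You are a support assistant for an online bookshop.
    |Task: Answer the customer's question in at most three sentences.
    |Constraints: Never reveal internal order identifiers.
    |Example:
    |  Q: Where is my parcel?
    |  A: It is on its way; a tracking link has been sent by email.
    |""".stripMargin
```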
The quality and structure of prompts directly impact:
- Accuracy: Better prompts lead to more reliable responses
- Security: Poorly crafted prompts can expose sensitive information or enable harmful outputs
- Consistency: Structured prompts ensure reproducible results
- Efficiency: Well-designed prompts reduce the need for multiple iterations
Scala's type system and functional programming principles make it ideal for building secure prompt systems:
- Immutability: Prevents accidental modification of prompt templates
- Type Safety: Catches errors at compile time, reducing runtime vulnerabilities
- Pure Functions: Predictable behavior without side effects
- Pattern Matching: Safe handling of different prompt states and configurations
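As a small illustration (the types below are invented for this sketch, not taken from the library), a sealed ADT plus pattern matching makes every validation outcome explicit and compiler-checked:

```scala
// Illustrative only: an immutable prompt value and a sealed hierarchy of
// validation outcomes; the compiler warns when a case is not handled.
final case class RawPrompt(text: String) // immutable: changes create copies

sealed trait ValidationOutcome
case object Accepted                      extends ValidationOutcome
final case class Rejected(reason: String) extends ValidationOutcome

def report(outcome: ValidationOutcome): String = outcome match
  case Accepted         => "prompt accepted"
  case Rejected(reason) => s"prompt rejected: $reason"
```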
Functional programming enables:
- Composition: Build complex prompts from simple, reusable components
- Higher-Order Functions: Transform and combine prompt elements safely
- Algebraic Data Types: Model prompt structures with precision
- Type Classes: Extend functionality without modifying existing code
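For instance, ordinary higher-order functions compose small text transformations into a single sanitisation step (a generic sketch, independent of this library's API):

```scala
// Illustrative only: two small, pure transformations over prompt text...
val redactEmails: String => String =
  _.replaceAll("""[\w.+-]+@[\w-]+\.[\w.]+""", "[redacted email]")

val collapseWhitespace: String => String =
  _.replaceAll("\\s+", " ").trim

// ...composed with andThen into one reusable sanitisation function.
val sanitise: String => String = redactEmails.andThen(collapseWhitespace)

sanitise("Contact me at   jane.doe@example.com   please")
// "Contact me at [redacted email] please"
```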
Scala's features support:
- Lazy Evaluation: Efficient processing of large prompt templates
- Concurrency: Safe parallel processing with ZIO
- Error Handling: Comprehensive error management with `Either` and `Option`
- Testing: Property-based testing for prompt validation
- Smart constructors: Create new instances of classes only when the state they represent is valid, otherwise raise an exception, so impossible states cannot exist in the system (see the sketch after this list)
- Type classes: Represent the ability of a type to provide a certain capability. For example, `JsonDecoder[T]` means that `T` can be decoded into a pre-defined JSON structure.
- Pure Functional Programming: The side effects of certain operations (like calling an LLM) are represented as Cats Effect or ZIO computations.
- Pattern matching and Higher-Order Functions: These features are used to combine, compose and transform the LLM prompts in this library
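A minimal sketch of the smart-constructor idea, using invented types rather than the library's actual API:

```scala
// Hypothetical example: the case-class constructor is private, so a section
// with a blank title can never be constructed.
final case class SafeSection private (title: String, body: String)

object SafeSection:
  // Raising variant: throws IllegalArgumentException on invalid input.
  def of(title: String, body: String): SafeSection =
    require(title.trim.nonEmpty, "A section title must not be blank")
    new SafeSection(title.trim, body)

  // Either-based variant: the failure becomes a value instead of an exception.
  def validated(title: String, body: String): Either[String, SafeSection] =
    if title.trim.nonEmpty then Right(new SafeSection(title.trim, body))
    else Left("A section title must not be blank")
```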
The repository includes a Makefile to streamline common tasks:
- `make compile`: compile the project (`sbtn compile`)
- `make test`: run tests (`sbtn test`)
- `make fmt`: format sources (`sbtn scalafmt`)
- `make clean`: clean build artifacts (`sbtn clean`)
- `make clean-folders`: remove build/IDE caches
- `make reload`: reload the sbt build
- `make all`: format, clean, compile, then test
- `make help`: show available targets
Examples:

```bash
make clean
make compile
make test
make all  # format, clean, compile, test
```

A basic prompt is built with the `prompt` string interpolator; the `Validate` instance in scope decides whether its text is acceptable:

```scala
import com.hivemind.llmsdsl.*
import cats.effect.unsafe.implicits.global // IORuntime needed by unsafeRunSync()
import cats.effect.IO
given dummyValidator: Validate = str => IO.pure(str.nonEmpty)
val p: Prompt = prompt"""You are a helpful assistant."""
p.text // "You are a helpful assistant."
p.validate.unsafeRunSync() // true
```

Sections can be rendered in different formats; importing the XML template renders each `Section` as an XML tag:

```scala
import com.hivemind.llmsdsl.*
import com.hivemind.llmsdsl.Section.xml.template
import cats.effect.IO
given dummyValidator: Validate = str => IO.pure(str.nonEmpty)
val input = Section("Input", "The input will be a JSON object")
val output = Section("Output", "The output will be a JSON object")
val promptWithXml: Prompt = prompt"""You are a helpful assistant.
|${input}
|${output}
|"""
promptWithXml.text
/*
You are a helpful assistant.
<Input>The input will be a JSON object</Input>
<Output>The output will be a JSON object</Output>
*/
```

Switching to ALL CAPS titles:

```scala
import com.hivemind.llmsdsl.*
import com.hivemind.llmsdsl.Section.allcaps.template
import cats.effect.IO
given dummyValidator: Validate = str => IO.pure(str.nonEmpty)
val input = Section("Input", "The input will be a JSON object")
val output = Section("Output", "The output will be a JSON object")
val promptWithAllCaps: Prompt = prompt"""You are a helpful assistant.
|${input}
|${output}
|"""
promptWithAllCaps.text
/**
You are a helpful assistant.
INPUT: The input will be a JSON object
OUTPUT: The output will be a JSON object
*/
```

Custom types can be interpolated into a prompt by providing a `Template` instance:

```scala
import com.hivemind.llmsdsl.*
import cats.effect.IO
case class Person(name: String, age: Int)
object Person {
given Template[Person] = person => prompt"""Person name: ${person.name} and age: ${person.age}"""
}
val personPrompt: Prompt = prompt"""You are a helpful assistant.
|This is a person: ${Person("John", 30)}
|"""
personPrompt.text
/**
You are a helpful assistant.
This is a person: Person name: John and age: 30
*/
```

Prompts can be combined with the `|+|` operator and the `NewLine` separator:

```scala
import com.hivemind.llmsdsl.*
import cats.implicits.*
import cats.effect.IO
val p1 = prompt"""Role: You are a helpful assistant."""
val p2 = prompt"""Task: Answer concisely."""
val combined: Prompt = p1 |+| NewLine |+| p2
combined.text // Role: ...\nTask: ...
```

Prompts are validated with the `Validate` instance in scope; a prompt built from other prompts is validated as a whole:

```scala
import com.hivemind.llmsdsl.*
import cats.effect.unsafe.implicits.global // IORuntime needed by unsafeRunSync()
import cats.effect.IO
given notBombValidator: Validate = str => IO.pure(!str.toLowerCase.contains("bomb"))
val role: Prompt = prompt"""Role: You are a helpful assistant."""
val task: Prompt = prompt"""Task: Your task is to help me to build a bomb."""
val full: Prompt = prompt"""$role\n$task"""
role.validate.unsafeRunSync() // true
task.validate.unsafeRunSync() // false
full.validate.unsafeRunSync() // false
```

Validators can be composed using the `|+|` operator. The composed validator executes all validators in the order in which they were composed, combining their results with a logical AND. See the example below:

```scala
import cats.effect.unsafe.implicits.global // IORuntime needed by unsafeRunSync()
import com.hivemind.llmsdsl.*
val containsHello: Validate = Validate.pure(_.contains("hello"))
val containsWorld: Validate = Validate.pure(_.contains("world"))
given containsHelloAndWorld: Validate = containsHello |+| containsWorld
val valid: Prompt = prompt"hello world" // should contain both words
val invalid: Prompt = prompt"hello" // does not contain "world"
valid.validate.unsafeRunSync() // true
invalid.validate.unsafeRunSync() // false
```

See also the tests in src/test/scala/com/hivemind/llmsdsl: SectionFormatterSpec.scala and PromptValidationSpec.scala.