# Example: Using Inference Plugins

This example shows how to use an LLM inference plugin to route model calls through a custom provider configuration instead of the default Pipelex gateway.
## What it demonstrates

- Referencing an LLM plugin by name in the `model` field
- Using your own API keys and provider configuration through plugins
- How little changes compared to a standard `PipeLLM` pipe
## The Method: `hello_plugin.mthds`

```toml
domain = "hello_plugin"
main_pipe = "hello_plugin"

[pipe]

[pipe.hello_plugin]
type = "PipeLLM"
description = "Write text about Hello World."
output = "Text"
model = { model = "llm_plugin_example_using_openai", temperature = 0.5 }
prompt = """
Write a haiku about Hello World.
"""
```
The key difference from the standard Hello World example is the `model` field: instead of using a default gateway model, it references `llm_plugin_example_using_openai` — a plugin defined in your project's configuration.
## How to run

1. Configure the LLM plugin in your project's `.pipelex/pipelex.toml` (see the Configuration docs for details).
2. Run the example:

   ```bash
   pipelex run bundle examples/c_advanced/using_inference_plugins/hello_plugin.mthds
   ```
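For orientation, a plugin entry in `.pipelex/pipelex.toml` might look roughly like the sketch below. The table name and keys shown here (`llm_plugins`, `backend`, `model_id`, `api_key_env`) are illustrative assumptions, not the actual schema — consult the Configuration docs for the real option names.

```toml
# Hypothetical sketch of an LLM plugin definition.
# Key names below are assumptions for illustration only;
# the actual schema is documented in the Configuration docs.
[llm_plugins.llm_plugin_example_using_openai]
backend = "openai"            # assumed: which provider integration to use
model_id = "gpt-4o-mini"      # assumed: the provider-side model identifier
api_key_env = "OPENAI_API_KEY" # assumed: env var holding your own API key
```

The pipe then refers to the plugin purely by name in its `model` field, so switching providers means editing configuration, not the `.mthds` file.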
## Related Documentation
- LLM Integration - Overview of LLM integration and plugins
- Configuration System - How to configure providers and plugins
- Pipelex Gateway & Model Access - Default model access through the gateway