Environment variables
All settings can be set via environment variables using the format CONTROLFLOW_<setting name>.
For example, to set the default LLM model to gpt-4o-mini and the log level to DEBUG, you could set the following environment variables:
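A minimal sketch, assuming the provider-prefixed model format shown under LLM settings below:

```bash
# Set the default LLM model and log level for ControlFlow
export CONTROLFLOW_LLM_MODEL="openai/gpt-4o-mini"
export CONTROLFLOW_LOG_LEVEL="DEBUG"
```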
.env file
Settings can also be loaded from a .env file. By default, ControlFlow looks for this file at ~/.controlflow/.env, but you can change this location by setting the CONTROLFLOW_ENV_FILE environment variable.
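For example, the gpt-4o-mini / DEBUG configuration described above could live in the .env file instead (a sketch):

```ini
# ~/.controlflow/.env
CONTROLFLOW_LLM_MODEL=openai/gpt-4o-mini
CONTROLFLOW_LOG_LEVEL=DEBUG
```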
Runtime settings
You can examine and modify ControlFlow's settings at runtime by inspecting or updating the controlflow.settings object. Most (though not all) changes to settings take effect immediately. Here is the above example, set programmatically:
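A sketch, assuming each setting is exposed as an attribute on controlflow.settings, as described above:

```python
import controlflow

# Equivalent to setting CONTROLFLOW_LLM_MODEL and CONTROLFLOW_LOG_LEVEL
controlflow.settings.llm_model = "openai/gpt-4o-mini"
controlflow.settings.log_level = "DEBUG"
```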
Available settings
Home settings
home_path: The path to the ControlFlow home directory. Default: ~/.controlflow
Display and logging settings
log_level: The log level for ControlFlow. Options: DEBUG, INFO, WARNING, ERROR, CRITICAL. Default: INFO
log_prints: Whether to log workflow prints to the Prefect logger by default. Default: False
log_all_messages: If True, all LLM messages will be logged at the debug level. Default: False
pretty_print_agent_events: If True, a PrintHandler will be enabled that automatically pretty-prints agent events. Note that this may interfere with logging. Default: True
Orchestration settings
orchestrator_max_agent_turns: The default maximum number of agent turns per orchestration session. If None, orchestration may run indefinitely. This setting can be overridden on a per-call basis. Default: 100
orchestrator_max_llm_calls: The default maximum number of LLM calls per orchestration session. If None, orchestration may run indefinitely. This setting can be overridden on a per-call basis. Default: 1000
task_max_llm_calls: The default maximum number of LLM calls over a task's lifetime. If None, the task may run indefinitely. This setting can be overridden on a per-task basis. Default: None
LLM settings
llm_model: The default LLM model for agents. Default: openai/gpt-4o
llm_temperature: The temperature for LLM sampling. Default: 0.7
max_input_tokens: The maximum number of tokens to send to an LLM. Default: 100000
Debug settings
debug_messages: If True, all messages will be logged at the debug level. Default: False
tools_raise_on_error: If True, an error in a tool call will raise an exception. Default: False
tools_verbose: If True, tools will log additional information. Default: True
Experimental settings
enable_experimental_tui: If True, the experimental TUI will be enabled. If False, the TUI will be disabled. Default: False
run_tui_headless: If True, the experimental TUI will run in headless mode, which is useful for debugging. Default: False
Prefect settings
These are default settings for Prefect when used with ControlFlow. They can be overridden by setting standard Prefect environment variables.
prefect_log_level: The log level for Prefect. Options: DEBUG, INFO, WARNING, ERROR, CRITICAL. Default: WARNING
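Following the CONTROLFLOW_<setting name> convention described under Environment variables, this setting would presumably be set as (a sketch):

```bash
# Raise Prefect's log level from ControlFlow's default of WARNING
export CONTROLFLOW_PREFECT_LOG_LEVEL="INFO"
```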