# Setting Up Your Project

## Installation
Like most Python packages, you can easily install Dandy using pip.
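A typical install command would look like the following (assuming the package is published on PyPI under the name `dandy`):

```bash
pip install dandy
```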
> **Info:** Installing Dandy will also install the `httpx`, `pydantic`, and `python-dotenv` packages.
## Creating a Settings File
You can create a `dandy_settings.py` file in the root of your project with the following contents:
```python
import os
from pathlib import Path

ALLOW_RECORDING_TO_FILE = True

BASE_PATH = Path(__file__).resolve().parent

LLM_CONFIGS = {
    'DEFAULT': {
        'TYPE': 'ollama',
        'HOST': os.getenv("OLLAMA_HOST"),
        'PORT': int(os.getenv("OLLAMA_PORT", 11434)),
        'API_KEY': os.getenv("OLLAMA_API_KEY"),
        'MODEL': 'llama3.1:8b-instruct-q4_K_M',
    },
    'LLAMA_3_2_3B': {
        'MODEL': 'llama3.2:3b-instruct-q4_K_M',
    },
    'GPT_4o': {
        'TYPE': 'openai',
        'HOST': os.getenv("OPENAI_HOST"),
        'PORT': int(os.getenv("OPENAI_PORT", 443)),
        'API_KEY': os.getenv("OPENAI_API_KEY"),
        'MODEL': 'gpt-4o',
    },
}
```
This configuration allows us to use both Ollama and OpenAI as our LLM services.
The `DEFAULT` entry in `LLM_CONFIGS` is used whenever an LLM action does not specify another config.
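Because `python-dotenv` ships with Dandy, one option is to keep values like `OLLAMA_HOST` in a `.env` file and load it at the top of `dandy_settings.py`. This is a sketch under the assumption that you load the file yourself; `load_dotenv` is python-dotenv's documented entry point:

```python
from dotenv import load_dotenv

# Populate os.environ from a .env file in the project root so that the
# os.getenv(...) calls in LLM_CONFIGS pick the values up.
load_dotenv()
```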
> **Tip:** Once the `DEFAULT` config is specified, its `TYPE`, `HOST`, `PORT`, and `API_KEY` values flow down to the other configs when they are not specified, as illustrated in the sketch below.
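A minimal sketch of that inheritance, assuming a plain dictionary merge (Dandy performs this internally; the names below are illustrative and not part of its public API):

```python
# Illustrative only: a named config inherits every key it does not set
# from DEFAULT, and its own keys win on conflict.
effective_config = {**LLM_CONFIGS['DEFAULT'], **LLM_CONFIGS['LLAMA_3_2_3B']}

# effective_config keeps TYPE, HOST, PORT, and API_KEY from DEFAULT,
# while MODEL is overridden to 'llama3.2:3b-instruct-q4_K_M'.
```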
## Environment Variables
Set the `DANDY_SETTINGS_MODULE` environment variable to specify which settings module should be used.
> **Note:** If the `DANDY_SETTINGS_MODULE` environment variable is not set, the system defaults to looking for a `dandy_settings.py` file in the current working directory or on `sys.path`.
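For example, to point Dandy at a different settings module (the module path `myproject.dandy_settings` here is hypothetical):

```bash
export DANDY_SETTINGS_MODULE=myproject.dandy_settings
```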
## More Settings
There are more settings you can configure in your project; the full list, along with each setting's type and default value, is shown below:
```python
import os
from pathlib import Path

# Agent limits
AGENT_DEFAULT_PLAN_TIME_LIMIT_SECONDS: int | None = 600
AGENT_DEFAULT_PLAN_TASK_COUNT_LIMIT: int | None = 100

# Recording and paths
ALLOW_RECORDING_TO_FILE: bool = False
BASE_PATH: Path | str = Path.cwd()

# Caching
CACHE_MEMORY_LIMIT: int = 1000
CACHE_SQLITE_DATABASE_PATH: Path | str = BASE_PATH
CACHE_SQLITE_LIMIT: int = 10000

# General behaviour
DEBUG: bool = False
FUTURES_MAX_WORKERS: int = 10

# HTTP behaviour
HTTP_CONNECTION_RETRY_COUNT: int = 4
HTTP_CONNECTION_TIMEOUT_SECONDS: int | None = 60

# LLM defaults (each can be overridden per config in LLM_CONFIGS)
LLM_DEFAULT_MAX_INPUT_TOKENS: int = 8000
LLM_DEFAULT_MAX_OUTPUT_TOKENS: int = 4000
LLM_DEFAULT_PROMPT_RETRY_COUNT: int | None = 2
LLM_DEFAULT_RANDOMIZE_SEED: bool = False
LLM_DEFAULT_REQUEST_TIMEOUT: int | None = None
LLM_DEFAULT_SEED: int = 77
LLM_DEFAULT_TEMPERATURE: float = 0.7

LLM_CONFIGS = {
    'DEFAULT': {
        'TYPE': 'ollama',
        'HOST': os.getenv("OLLAMA_HOST"),
        'PORT': int(os.getenv("OLLAMA_PORT", 11434)),
        'API_KEY': os.getenv("OLLAMA_API_KEY"),
        'MODEL': 'a_model:9b',
        'TEMPERATURE': 0.5,
        'SEED': 77,
        'RANDOMIZE_SEED': False,
        'MAX_INPUT_TOKENS': 8000,
        'MAX_OUTPUT_TOKENS': 4000,
    },
}
```