# Setup
From zero to a running Yggdrasil ecosystem in minutes. You need Python 3.10+, LM Studio, and a Telegram bot token.
## Prerequisites
| Requirement | Details |
|---|---|
| Python 3.10+ | Tested on Python 3.10 and 3.11. Windows, Linux, and WSL supported. |
| LM Studio | Download from lmstudio.ai. Free. Runs models locally. |
| Telegram Bot Token | Create a bot via @BotFather. Keep the token secret. |
| Hardware | 16GB RAM minimum. 32GB+ recommended. NVIDIA GPU optional but helpful. |
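You can confirm that your interpreter meets the version requirement with a short check (a sketch; run it with whichever `python`/`python3` launches your interpreter):

```python
import sys

def meets_requirement(version_info=sys.version_info) -> bool:
    """Return True when the running interpreter is Python 3.10 or newer."""
    return version_info >= (3, 10)

if __name__ == "__main__":
    status = "OK" if meets_requirement() else "too old, need 3.10+"
    print(f"Python {sys.version.split()[0]}: {status}")
```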
## Installation

### 1. Clone the repository

```bash
$ git clone https://github.com/BrierAinz/Yggdrasil.git
$ cd Yggdrasil
```
### 2. Configure environment

Copy the example environment file and fill in your values:

```bash
$ cp Asgard/Lilith/.env.example Asgard/Lilith/.env
$ notepad Asgard/Lilith/.env   # Windows
$ nano Asgard/Lilith/.env      # Linux/WSL
```
Your `.env` should look like this:

```ini
LM_STUDIO_URL=http://localhost:1234/v1
MODEL=auto
TELEGRAM_BOT_TOKEN=your_b...here
TELEGRAM_OWNER_CHAT_ID=your_telegram_chat_id
LILITH_PATH=D:\\Proyectos\\Yggdrasil\\Asgard\\Lilith
LILITH_BASE_DIR=D:\\Proyectos\\Yggdrasil
LOG_LEVEL=INFO
DEVICE=cuda
```
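Before launching, it can help to sanity-check the file. A minimal sketch that parses the plain `KEY=VALUE` format and reports required keys that are missing or empty (the key list mirrors the example above; adjust it if your `.env` differs):

```python
REQUIRED_KEYS = {
    "LM_STUDIO_URL", "MODEL", "TELEGRAM_BOT_TOKEN",
    "TELEGRAM_OWNER_CHAT_ID", "LILITH_PATH", "LILITH_BASE_DIR",
}

def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def missing_keys(env: dict) -> set:
    """Return required keys that are absent or left empty."""
    return {k for k in REQUIRED_KEYS if not env.get(k)}

# Usage against the real file:
#   env = parse_env(open("Asgard/Lilith/.env").read())
#   print(missing_keys(env) or "all required keys set")
```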
### 3. Start LM Studio

Open LM Studio, load a model (13B+ parameters recommended), and start the local server on port 1234 with the API set to "OpenAI-compatible".
### 4. Launch Yggdrasil

:::tip Unified Launcher

On Windows, use the unified launcher:

```bash
$ start_lilith.bat
```

This starts the Gateway (port 8000) and the Telegram Bot in a single console.

:::
Or launch manually:

```bash
# Terminal 1 - Gateway
$ cd Asgard/lilith-orchestrator/gateway
$ uvicorn gateway:app --host 0.0.0.0 --port 8000

# Terminal 2 - Telegram Bot
$ cd Vanaheim/Bots_Lilith_v5/telegram
$ python bot.py
```
### 5. Verify

Send `/start` to your bot on Telegram. If you get a welcome message, everything is connected.
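If nothing happens, you can probe the two local services directly. A stdlib-only sketch; the paths are assumptions (uvicorn/FastAPI apps usually serve interactive docs at `/docs`, and OpenAI-compatible servers usually expose `GET /v1/models`):

```python
import urllib.error
import urllib.request

def is_reachable(url: str, timeout: float = 3.0) -> bool:
    """True if the URL returns any HTTP response; False if nothing is listening."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # the server answered, just with an error status
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Gateway:  ", is_reachable("http://localhost:8000/docs"))
    print("LM Studio:", is_reachable("http://localhost:1234/v1/models"))
```

If either line prints `False`, revisit the corresponding launch step before debugging the bot itself.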
## Running Tests

```bash
$ pytest              # full suite
$ pytest -v           # verbose
$ pytest -k memory    # filter by keyword
```
Pre-commit hooks are configured for code formatting:

```bash
$ pre-commit install
$ pre-commit run --all-files
```
## Troubleshooting
### Gateway connection refused

Make sure the Gateway is running on port 8000 before starting the Telegram bot. Check firewall rules if you are running on a remote machine.
### LM Studio not responding

Verify that the local server is enabled in LM Studio and that a model is loaded. The default URL is `http://localhost:1234/v1`.
### Bot not replying

Check that `TELEGRAM_BOT_TOKEN` and `TELEGRAM_OWNER_CHAT_ID` are correct. The bot only responds to the owner.
### Memory errors on import

`sentence-transformers` and PyTorch are heavy dependencies. Install with `pip install --no-deps` if you already have them, or use a virtual environment to keep them isolated.