I built this because I used to be afraid to talk to people in
certain situations — job interviews, difficult conversations,
social situations I didn't know how to navigate. I kept wishing
I could simulate them first.
Took me a while to realise I could actually build that.
The interesting technical challenge was making agents feel
genuinely distinct rather than variations of the same helpful AI
voice. The solution was grounding each agent in real behavioral
research pulled at world-creation time, storing their full
identity in a plain markdown file, and giving them a specific
grievance — something eating at them before the scene even starts.
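To make that concrete, here's a minimal sketch of the identity-file idea. The section names, the example agent, and the parser are all illustrative, not the project's actual schema:

```python
# Illustrative sketch: loading an agent identity from a markdown file.
# Field names and layout are hypothetical, not the project's real format.
from pathlib import Path

EXAMPLE_IDENTITY = """\
# Agent: Thandi

## Background
Hiring manager, 12 years in logistics. Grounded in research on
interviewer anchoring bias.

## Grievance
Her last three hires quit within a month, and leadership blamed
her screening process.
"""

def load_identity(path: Path) -> dict:
    """Split a markdown identity file into named sections."""
    sections, current = {}, "header"
    for line in path.read_text().splitlines():
        if line.startswith("## "):
            current = line[3:].strip().lower()
            sections[current] = []
        else:
            sections.setdefault(current, []).append(line)
    return {k: "\n".join(v).strip() for k, v in sections.items()}
```

Keeping it in plain markdown means the whole identity is human-readable and diffable, which matters once you start thinking about rewriting it mid-simulation.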
Happy to answer questions about the agent prompting approach,
the parallel asyncio loop, or anything else. Built from Malawi
on zero budget using free API tiers.
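For anyone curious about the asyncio part, the shape of the loop is roughly this. The agent names and step function are placeholders, and the real loop makes LLM calls where this one just stubs:

```python
# Rough sketch of a parallel agent tick loop with asyncio.
# In the real project each step is an LLM call; here it's a stub.
import asyncio

async def agent_step(name: str, tick: int) -> str:
    await asyncio.sleep(0)  # stand-in for an async API call
    return f"{name}@tick{tick}"

async def run_tick(agents: list[str], tick: int) -> list[str]:
    # Fan out one step per agent and await them together, so agents
    # "think" in parallel rather than taking strict sequential turns.
    return await asyncio.gather(*(agent_step(a, tick) for a in agents))

async def simulate(agents: list[str], ticks: int) -> list[list[str]]:
    return [await run_tick(agents, t) for t in range(ticks)]
```

The gather-per-tick design also plays nicely with free API tiers, since each tick is one bounded burst of requests.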
The "specific grievance" detail is what makes this interesting. Most multi-agent sims feel flat because agents are purely goal-oriented — giving them a pre-existing tension before the scene starts is a much more realistic model of how conversations actually work.
Curious how you handle grievance drift over a long session — does it fade as the agent "resolves" it, or does it stay fixed as a personality constant?
Right now grievances are fixed — they live in the agent's .md file and persist
as a personality constant throughout the session. update_mood() shifts emotional
state tick by tick, but it doesn't rewrite the underlying grievance.
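In sketch form, the split looks something like this. It's a simplified stand-in, not the real update_mood(), which weighs more than a single scalar:

```python
# Simplified stand-in for the mood-vs-grievance split described above.
# Mood drifts tick by tick; the grievance is read-only for the session.
from dataclasses import dataclass

@dataclass
class AgentState:
    grievance: str      # fixed for the whole session (lives in the .md)
    mood: float = 0.0   # drifts tick by tick, clamped to [-1, 1]

def update_mood(state: AgentState, delta: float) -> AgentState:
    state.mood = max(-1.0, min(1.0, state.mood + delta))
    return state  # note: state.grievance is never rewritten here
```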
Your question is actually pointing at the most interesting unsolved problem in
the project. A grievance resolution arc — where the .md file itself gets rewritten
mid-simulation as the agent "processes" something — would make long sessions feel
genuinely different from short ones. It's on the roadmap but I haven't built it yet.
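If I do build it, one possible shape is below. This is entirely hypothetical; nothing like it exists in the codebase yet:

```python
# Hypothetical sketch of a grievance resolution arc: intensity decays
# only when the grievance is acknowledged in the scene, and the .md
# rewrite would happen once intensity reaches zero.

def resolve_step(grievance: str, intensity: float,
                 acknowledged: bool, rate: float = 0.1) -> tuple[str, float]:
    """Decay intensity only on ticks where the grievance was acknowledged."""
    if acknowledged:
        intensity = max(0.0, intensity - rate)
    if intensity == 0.0:
        # Marking it processed; in a real implementation this is where
        # the agent's .md file would be rewritten mid-simulation.
        grievance = f"(resolved) {grievance}"
    return grievance, intensity
```

Gating the decay on acknowledgment, rather than on elapsed time, is what would keep short sessions from accidentally resolving everything.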
The risk is it could make agents feel too therapeutic. Real people carry grievances
for years. I'm not sure what the right answer is yet.