Hi HN,
I am the author of this article. I'm a non-coder (architect) based in Japan.
Over the past 11 months, I've been experimenting with Gemini 1.5 Pro to solve the "Context Dilution" problem (where the AI gets "drunk" and hallucinates in long contexts).
Instead of fine-tuning, I applied the cognitive model of Abhidhamma (Ancient Buddhist Psychology) to the system architecture.
The Architecture (a rough sketch of how the layers could fit together follows the list):

1. Super-Ego: System Instructions v1.5.0 (Logic Filter)
2. Ego: Gemini 1.5 Pro (Processor with limited active context)
3. Id: Vector DB (Deep storage of 800k+ tokens)
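To make the split concrete, here is a minimal, hypothetical sketch of the kind of retrieval pipeline this maps to, assuming the standard google-generativeai Python SDK and a stand-in in-memory vector store. My own setup was assembled through dialogue rather than from code like this, so treat names such as recall(), ask(), and the system-instructions file path as illustrative only:

    # Hypothetical sketch, not my actual setup: one way the Super-Ego / Ego / Id
    # split could be wired together with the google-generativeai SDK.
    import numpy as np
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # assumption: standard SDK setup

    # "Super-Ego": the logic filter, loaded from an illustrative file path.
    SUPER_EGO = open("system_instructions_v1.5.0.txt").read()

    # "Id": deep storage. A real deployment would use an actual vector DB;
    # here an in-memory numpy matrix stands in for it.
    def embed(text: str) -> np.ndarray:
        resp = genai.embed_content(model="models/text-embedding-004", content=text)
        return np.array(resp["embedding"])

    corpus = ["...chunk 1...", "...chunk 2..."]  # the 800k+ token archive, chunked
    corpus_vecs = np.stack([embed(c) for c in corpus])

    def recall(query: str, k: int = 3) -> list[str]:
        # Cosine similarity against the stored chunks, return the top k.
        q = embed(query)
        sims = corpus_vecs @ q / (np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q))
        return [corpus[i] for i in sims.argsort()[::-1][:k]]

    # "Ego": Gemini 1.5 Pro runs with the Super-Ego as its system instruction
    # and only ever sees the few chunks recalled for the current question.
    ego = genai.GenerativeModel("gemini-1.5-pro", system_instruction=SUPER_EGO)

    def ask(question: str) -> str:
        context = "\n\n".join(recall(question))
        return ego.generate_content(f"Context:\n{context}\n\nQuestion: {question}").text

The point of the split is that the Ego never holds the full 800k+ token archive in its active context; it only receives the Super-Ego filter plus whatever the Id surfaces for the question at hand.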
I wrote zero lines of code for this. I built it entirely through dialogue with the AI. I've open-sourced the System Instructions on GitHub (link in the article).
I'd love to hear your feedback on this "Pseudo-Human" approach.