File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/user/gupik/guppylm/guppylm/__main__.py", line 48, in <module>
main()
File "/home/user/gupik/guppylm/guppylm/__main__.py", line 29, in main
engine = GuppyInference("checkpoints/best_model.pt", "data/tokenizer.json")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/gupik/guppylm/guppylm/inference.py", line 17, in __init__
self.tokenizer = Tokenizer.from_file(tokenizer_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Exception: No such file or directory (os error 2)
```
Is there some documentation for this? The code is probably the simplest (Not So) Large Language Model implementation possible, but it is not straightforward to understand for developers not familiar with multi-head attention, ReLU FFNs, LayerNorm, and learned positional embeddings.
This project shares similarities with Minix. Minix is still used at universities as an educational tool for teaching operating system design, and it is the operating system that taught Linus Torvalds how to design (monolithic) operating systems. Similarly, having students add capabilities to GuppyLM is a good way to learn LLM design.
Give the code to an LLM and have a discussion about it.
Does this work? Is there no more need for writing high-level docs?
> Does this work?
Absolutely. If you loaded this into an agentic coding harness with a decent model, I can practically guarantee it would be able to help you figure out what's going on.
> Is there no more need for writing high-level docs?
Absolutely not. That would be like exploring a cave without a flashlight, knowing that you could just feel your way around in the dark instead.
Code is not always self-documenting, and can often tell you how it was written, but not why.
> If you loaded this into an agentic coding harness with a decent model, I can practically guarantee it would be able to help you figure out what's going on.
My non-coder but technically savvy boss has been doing this lately to great success. It's nice because I spend less time on it since the model has taken my place for the most part.
There are so many blogs and tutorials about this stuff in particular, I wouldn't worry about it being outside the training data distribution for modern LLMs. If you have a scarce topic in some obscure language I'd be more careful when learning from LLMs.
LLMs can tell you what the code does but not why the developer chose to do it that way.
Also, large codebases are harder to understand. But projects like these are simple to discuss with an LLM.
> LLMs can tell you what the code does but not why the developer chose to do it that way.
Do LLMs not take comments into consideration? (Serious question - I'm just getting into this stuff)
How does this compare to Andrej Karpathy's microgpt (https://karpathy.github.io/2026/02/12/microgpt/) or minGPT (https://github.com/karpathy/minGPT)?
I haven't compared it with anything yet. Thanks for the suggestion; I'll look into these.
Who cares how it compares? It's not a product, it's a cool project.
Even cool projects can learn from others. Maybe they missed something that could benefit the project, or made some interesting technical choice that gives a different result.
For the readers/learners, it's useful to understand the differences so we know what details matter, and which are just stylistic choices.
This isn't art; it's science & engineering.
But it isn't the OP's responsibility to compare their project to all other projects. The GP could themselves perform the comparison and post their thoughts instead of asking an open ended question.
> it isn't the OP's responsibility to compare their project to all other projects
No one, including the GP, said it was.
It isn't, but such information will be immensely helpful to anyone who wants to learn from such projects. Some tutorials are objectively better than others, and learners can benefit from such information.
100% agree, I didn't mean to imply that OP is responsible for that, or that the (lack of) comparison detracts in any way from the work.
> Who cares how it compares
Well, the person who asked the question, for one. I'm sure they're not the only one. Best not to assume why people are asking though, so you can save time by not writing irrelevant comments.
Microgpt isn’t a product either. Are you saying that differences between cool projects aren’t worth thinking and conversing about?
https://bbycroft.net/llm has 3d Visualization of tiny example LLM layers that do a very good job at showing what is going on (https://news.ycombinator.com/item?id=38505211)
Pretty neat! I'll definitely take a deeper look into this.
Has little to do with this, but I have to say your project is indeed pretty cool! Consider adding some more UI?
Neat!
It's genuinely a great introduction to LLMs. I built my own a while ago based on Milton's Paradise Lost: https://www.wvrk.org/works/milton
This really makes me wonder whether it would be feasible to make an LLM trained exclusively on toki pona (https://en.wikipedia.org/wiki/Toki_Pona)
There isn't enough training data though, is there? The "secret sauce" of LLMs is the vast amount of training data available + the compute to process it all.
I think you could probably feed a copy of a toki pona grammar book to a big model, and have it produce ‘infinite’ training data
There are not enough samples in that book to generate new "infinite" data.
Cool project. I'm working on something where multiple LLM agents share a world and interact with each other autonomously. One thing that surprised me is how much the "world" matters — same model, same prompt, but put it in a system with resource constraints, other agents, and persistent memory, the behavior changes dramatically. Made me realize we spend too much time optimizing the model and not enough thinking about the environment it operates in.
This is probably a consequence of the training data being fully lowercase:
You> hello
Guppy> hi. did you bring micro pellets.
You> HELLO
Guppy> i don't know what it means but it's mine.
Great find! It appears uppercase tokens are completely unknown to the tokenizer.
But the character still comes through in response :)
Finally an LLM that's honest about its world model. "The meaning of life is food" is arguably less wrong than what you get from models 10,000x larger
It's arguably even better than the most famous answer to that question.
which is?
https://medium.com/change-your-mind/the-meaning-of-life-is-4...
The meaning/goal of life is to reproduce. Food (and everything else) is only a means to it. Reproduction is the only root goal given by nature to any life form. All resources and qualities provided are only there to help mating.
Reproduction is the goal of genes.
Food (not dying) is the goal of organisms.
I'd argue neither genes nor life has a "goal". They are what they are because they've been successful at continuing their existence. Would you say a rock's goal is not to get broken?
Only because genes/organisms can make choices (changes to their programming, or decisions) to optimize their path towards their goal.
A rock is maybe not a good counterexample, but a crystal is, because it can grow over time. So in some sense, it tries not to break. However, a crystal cannot make any choices; its behavior is locked into the chemistry it starts with.
Then why are reproductive rates so low in western countries?
https://en.wikipedia.org/wiki/List_of_countries_by_total_fer...
The western lifestyle is an evolutionary dead end?
It seems that some in the West want it to be and are working hard to make it so.
No, evolution has encoded lust. It has not yet allowed for condoms. But it's a process.
Nice work and thanks for sharing it!
Now, I ask, have LLMs been demystified for you? :D
I am still impressed by how much (for the most part) trivial statistics and a lot of compute can do.
This is such a smart way to demystify LLMs. I really like that GuppyLM makes the whole pipeline feel approachable. Great work!
This is a nice idea. A tiny implementation can be way more useful for learning than yet another wrapper around a big model, especially if it keeps the training loop and inference path small enough to read end to end.
I like the idea; it's just that the examples are reproduced from the training dataset.
How does it handle unknown queries?
It mostly doesn't; at 9M params it has very limited capacity. The whole idea of this project is to demonstrate how language models work.
Could it be possible to train an LLM only through chat messages, without any other data or input?
If Guppy doesn't know regular expressions yet, could I teach them to it just by conversation? It's a fish, so it probably wouldn't understand much of my blabbing, but it would be interesting to give it a try.
Or is there some hard architectural limit in current LLMs, such that the training needs to be done offline and with a fairly large training set?
What happens during chat is just inference. The weights are frozen, and it generates tokens conditioned on the conversation so far. No learning happens. The "learning during conversation" effect you see in bigger models is in-context learning: the model uses the full chat history in its attention window, but nothing persists after the session ends.
At 9M params you won't get meaningful in-context learning either. That capability seems to emerge around 1B+ params, and it has more of a phase-transition quality than a smooth ramp. So unfortunately no, you can't teach Guppy regex by talking to it.
There is some research on "test-time training" where weights actually get updated during inference, but it's expensive and niche. Backprop costs roughly 3x the compute of a forward pass, so doing it live in a conversation is impractical for anything but tiny models.
What does "done offline" mean? Otherwise you are limited by context window.
Wow that is such a cool idea! And honestly very much needed. LLMs seem to be this blackbox nobody understands. So I love every effort to make that whole thing less mysterious. I will definitely have a look at dabbling with this, may it not be a goldfish LLM :)
I am trying to find how the synthetic data was created (looking through the repo) and didn't find it. Maybe I am missing it - Would love to see the prompts and process on that aspect of the training data generation!
It's here:
https://github.com/arman-bd/guppylm/blob/main/guppylm/genera...
Uses a sort of mad-libs templatized style to generate all the permutations.
Building it yourself is always the best test if you really understand how it works.
Does this work by just training once with next-token prediction? I'd like to understand better how it creates fluent sentences, if anyone can provide insights.
> you're my favorite big shape. my mouth are happy when you're here.
Laughed loudly :-D
This is a direct output from the synthetic training data though - wonder if there is a bit of overfitting going on or it’s just a natural limitation of a much smaller model.
Forked. Very cool. I appreciate the simplicity and documentation.
I love this! Seems like it can't understand uppercase letters though
Uppercase letters were intentionally ignored.
This is so cool! I'd love to see a write-up on how you made it and what you referenced, because designing neural networks always feels like a maze ;)
Why are there so many dead comments from new accounts?
Because despite what HN users seem to think, HN is an LLM-infested hellscape to the same degree as Reddit, if not more.
You’re absolutely right! HN isn’t just LLM-infested hellscape, it’s a completely new paradigm of machine assisted chocolate-infused information generation.
Just let me know which type of information goo you'd like me to generate, and I'll tailor the perfect one for you.
But what should we do? The parent company isn't transparent about the seriousness of this problem.
It really seems it's mostly AI comments on this. Maybe this topic is attractive to all the bots.
They all seem to be slop comments.
Thanks. Tinkering is how I learn and this is what I’ve been looking for.
Love it! I think it's important to understand how the tools we use (and will only increasingly use) work under the hood.
I was going to suggest implementing RoPE to fix the context limit, but realized that would make it anatomically incorrect.
I intentionally removed all optimizations to keep it vanilla.
This is amazing work. Thank you.
how did you generate the synthetic data?
> A 9M model can't conditionally follow instructions
How many parameters would you need for that?
My initial idea was to train a navigation decision model with 25M parameters for a Raspberry Pi, which, in testing, was getting about 60% of tool calls correct. IMO, it seems like around 20M parameters would be a good size for following some narrow & basic language instructions.
Ok. This makes me wonder about a broader question. Is there a scientific approach showing a pyramid of cognitive functions, and how many parameters are (minimally) required for each layer in this pyramid?
Would have been funny if it were called "DORY", given the fish's memory recall issues vs. LLMs' similar recall issues :)
OMG! Why didn't I think of this first :P
How does it handle longer context, or does it start hallucinating after like 2 sentences? Curious what the ceiling is with only 9M params.
This is really great! I've been wanting to do something similar for a while.
I... wow, you made an LLM that can actually tell jokes?
With 9M params it just repeats the joke from the training dataset.
Hm, I can actually try the training on my GPU. One of the things I want to try next. Maybe a bit more complex than a fish :)
I don't mean to be 'that guy', but after a quick review, this really feels like low-effort AI slop to me.
There is nothing wrong with using AI tools to write code, but nothing here seems to have taken more than a generic 'write me a small LLM in PyTorch' prompt, or any specific human understanding.
The bar for what constitutes an engineering feat on HN seems to have shifted significantly.
I could fork it and create TrumpLM. Not a big leap, I suppose.
Probably even 8M params would be too much :)
As long as you use the best parameters, it doesn't matter.
Grab her by the pointer.
Great and simple way to bridge the gap between LLMs and users coming into the field!
Love it! Great idea for the dataset.
Is this a reference from the Bobiverse?
Haha, funny name :)
Adorable! Maybe a personality that speaks in emojis?
OMG! You just gave me the next idea..
I love these kinds of educational implementations.
I really want to praise the (unintentional?) nod to Nagel: by limiting capabilities to the representation of a fish, the user is immediately able to understand the constraints. It can only talk like a fish because it's very simple.
Especially compared to public models, that's a really simple correspondence to grok intuitively (small LLM → only as verbose as a fish, larger LLM → more verbose), so kudos to the author for making it simple and fun.
> the user is immediately able to understand the constraints
Nagel's point was quite literally the opposite[1] of this, though. We can't understand what it must "be like to be a bat" because their mental model is so fundamentally different than ours. So using all the human language tokens in the world can't get us to truly understand what it's like to be a bat, or a guppy, or whatever. In fact, Nagel's point is arguably even stronger: there's no possible mental mapping between the experience of a bat and the experience of a human.
[1] https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf
IMO we're a step before that: We don't even have a real fish involved, we have a character that is fictionally a fish.
In LLM-discussions, obviously-fictional characters can be useful for this, like if someone builds a "Chat with Count Dracula" app. To truly believe that a typical "AI" is some entity that "wants to be helpful" is just as mistaken as believing the same architecture creates an entity that "feels the dark thirst for the blood of the living."
Or, in this case, that it really enjoys food-pellets.
I'd strongly disagree with that. We're all living in the same shared universe, and underlying every intelligence must be precisely an understanding of events happening in this space-time.
What does 'precisely' mean? Everyone has the same understanding of events - a precise one?
No I am saying the basis of intelligence must be shared, not that we have the same exact mental model.
I might for example say a human entered a building, a bat might on the other hand think "some big block with two sticks moved through a hole", but both are experiencing a shared physical observation, and there is some mapping between the two.
It's like when people say that if there are aliens, they would find the same mathematical constants that we do.
Different argument
I'm not going to argue, other than to say that you need to view the point from a third-party perspective evaluating "fish" vs. "more verbose thing", such that the composition is the determinant of the complexity of interaction (which has a unique qualia per Nagel).
Hence why it's an "unintentional nod", not an instantiation.
Tiny LLM is an oxymoron, just sayin.
How about: LLMs are on a spectrum and this one is on the tiny side?
True, but most people would ignore an LM if it weren't an LLM.
* How do I create the dataset? I downloaded it, but it is compressed in a binary format.
* How do I do the training? In the cloud or on my own dev machine?
* How do I create a GGUF?
You sound like Guppy. Nice touch.
```
$ uv run python -m guppylm chat
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/user/gupik/guppylm/guppylm/__main__.py", line 48, in <module>
    main()
  File "/home/user/gupik/guppylm/guppylm/__main__.py", line 29, in main
    engine = GuppyInference("checkpoints/best_model.pt", "data/tokenizer.json")
  File "/home/user/gupik/guppylm/guppylm/inference.py", line 17, in __init__
    self.tokenizer = Tokenizer.from_file(tokenizer_path)
Exception: No such file or directory (os error 2)
```

Maybe add checkpoint resuming (load the best model if found) and train again:

```
# after configuring the device
checkpoint_path = "checkpoints/best_model.pt"
ckpt = torch.load(checkpoint_path, map_location=device, weights_only=False)
model = GuppyLM(mc).to(device)
if "model_state_dict" in ckpt:
    model.load_state_dict(ckpt["model_state_dict"])
else:
    model.load_state_dict(ckpt)
start_step = ckpt.get("step", 0)
print(f"Resuming from step {start_step}")
```
Neat!
Cool
Did something similar last year https://github.com/aditya699/EduMOE
I think this is a nice project because it is end to end and serves its goal well. Good job! It's a good example how someone might do something similar for a specific purpose. There are other visualizers that explain different aspects of LLMs but this is a good applied example.
How much training data did you end up needing for the fish personality to feel coherent? Curious what the minimum viable dataset looks like for something like this.
Great work! I still think that [1] does a better job of helping us understand how GPTs and LLMs work, but yours is funnier.
Then, some criticism. I probably don't get it, but I think the HN headline does your project a disservice. Your project does not demystify anything (see below), and it diverges from your project's claim, too.

Furthermore, I think you claim too much on your GitHub. "This project exists to show that training your own language model is not magic," and then it just posts a few command line statements to execute. Yeah, running a mail server is not magic either, just apt-get install exim4.

So, code. Looking at train_guppylm.ipynb and, oh, it's PyTorch again. I'm better off reading [2] if I'm looking into that (I know, it is a published book, but I maintain my point).
So, in short, it does not help the initiated or the uninitiated. For the initiated it needs more detail to be useful; for the uninitiated, more context to be understood. Still a fun project, even if oversold.
[1] https://spreadsheets-are-all-you-need.ai/
[2] https://github.com/rasbt/LLMs-from-scratch
This comment seems to be astroturfing to sell a course.