* Dual Mode: GAIA comes in two flavors:
  * Hybrid Mode: Optimized for Ryzen AI PCs, combining AMD Neural Processing Unit (NPU) and Integrated Graphics Processing Unit (iGPU) for maximum performance
  * Generic Mode: Compatible with any Windows PC, using Ollama as the backend
https://github.com/amd/gaia
"OS: Windows 11 Pro/Home (GAIA does not support macOS or Linux at this time, ..."
2020 was roughly when I noticed AMD hated developers.
Until AMD fixes that bug, NVidia will continue to outflank them at every point.
Ballmer had it right: "Developers, developers, developers."
looks like “on any PC” means “on any Windows PC”.
... also with specific new AMD hardware, it seems?
That is the original definition of PC.
https://en.m.wikipedia.org/wiki/Get_a_Mac
According to the article you linked, that was debated even at the time:
> PC Magazine Editor in Chief Lance Ulanoff criticized the campaign's use of the term "PC" to refer specifically to IBM PC compatible, or Wintel, computers, noting that this usage, though common, is incorrect, as the Macintosh is also a personal computer.
But it's not the PC (the IBM 5150) or one of its successor machines.
To be honest, I agree with you: if it's intended as a personal computer, it's a PC. Yes, tablets and phones (pocket computers?) included. But I understand the argument that a PC is a specific architecture. It's a wrong argument, but I get it.
To be clear, I don't have an opinion in this fight. I was just interested in the historical trivia. Thanks for the little history lesson!
PCs predate GUIs.
https://www.computerhistory.org/revolution/personal-computer...
Indeed, the IBM PC certainly didn't have one, and I had to wait until MS-DOS 5 to use one that wasn't a game.
As someone who started on a Timex 2068, I can certainly say that no one referred to CP/M, MSX, Atari, Amiga, or Mac machines as PCs.
The term PC definitely meant an IBM PC running either IBM PC-DOS or MS-DOS.
Windows only.
Dependencies: Miniconda :|
Related: https://news.ycombinator.com/item?id=42886680
Ollama does not support Vulkan on any platform. So this at least provides another choice.
Being Windows-only is still baffling. I guess they assume their biggest user base is on Windows, and that Linux users are few and don't care too much about running LLMs on iGPUs (the experience is poor). But would it really cost them that much work to support other OSes?
Edit:
> GAIA_Installer.exe: For running agents on non-Ryzen AI PCs, this uses Ollama as the backend. (https://github.com/amd/gaia)
...eh, what's the point? Why don't I just install Ollama?
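For comparison, llama.cpp (which the generic mode's Ollama backend ultimately wraps) can already target Vulkan through an optional CMake flag — a minimal build sketch, assuming a recent llama.cpp checkout and an installed Vulkan SDK:

```shell
# Sketch: building llama.cpp with its Vulkan backend enabled
# (GGML_VULKAN is the flag documented in llama.cpp's build instructions)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release
```

So a Vulkan path for iGPUs exists upstream; it's the Ollama layer that doesn't expose it.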
Extensions to Khronos APIs, to this day, tend to be born as DirectX features, developed by GPU vendors in collaboration with Microsoft, so no surprise there.
Also note that NVidia happens to have its own Linux distribution, even though it isn't racing to support GNU/Linux in general.
> But would it really cost them that much work to support other OSes?
The Radeon group never really excelled at writing software and drivers; Nvidia has always run rings around them. Bad habits are hard to reverse.
Looks like yet another wrapper for ollama…
"Gaia uses the open-source Lemonade SDK from ONNX TurnkeyML for LLM inference."
It says it right in the article.
It's all academic-quality Python code, and I don't know why AMD isn't partnering with high-performance inference engines like llama.cpp (the project Ollama borrowed its code from, and then didn't keep up with) instead.
With Ollama itself being another wrapper around llama.cpp...
(Gaia seems to have two modes: one for running on AMD-specific hardware, and a second that is a wrapper around Ollama, the llama.cpp wrapper.)
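To make the "wrapper around a wrapper" point concrete, here's a minimal sketch of talking to Ollama's REST API directly — which is essentially all a thin wrapper does. The `/api/generate` endpoint and default port 11434 are Ollama's documented ones; the model name `llama3` is an assumption (use whatever you've pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body that any thin wrapper ultimately sends to Ollama."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    """POST a prompt to a locally running Ollama daemon and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (needs a running Ollama daemon and a pulled model, e.g. `ollama pull llama3`):
#   print(generate("llama3", "Why is the sky blue?"))
```

Ten lines of stdlib code, no extra install — which is why each additional wrapper layer has to justify itself.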