Yes, I do. It is quite helpful. For main coding tasks I use Claude, but if my credits run out or I have enough time to wait for an answer, I use Ollama extensively. I would recommend that developers maintain their own AI pipeline as well.
For anything dealing with personal data, like browser inputs, I would exclusively use local models too. Probably still niche, but non-local AI would be a deal breaker for me for both browsers and the OS.
I'm curious what the workflow actually looks like for people running Ollama day-to-day.
Do you mostly use it through the terminal, a UI like Open WebUI, or via integrations with other tools?
I'm trying to understand where a browser integration would actually fit - if at all.
I am running it, but it is only useful for very easy inference tasks; anything beyond that needs very high computing power. I currently run it on a 32GB Mac Studio M1, and I mostly use it for generating commit messages.
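For what it's worth, the commit-message workflow can be as simple as piping the staged diff into the Ollama CLI. A minimal sketch; the model name and prompt here are assumptions, not something the commenter specified:

```shell
#!/bin/sh
# Sketch: generate a commit message from the staged diff with a local model.
# Assumes Ollama is installed and a model (here "llama3.2") has been pulled
# via `ollama pull llama3.2`.
git diff --staged | ollama run llama3.2 \
  "Write a one-line commit message for the following diff:"
```

Wiring this into a git alias or a `prepare-commit-msg` hook is a common next step, though latency on smaller machines can make the interactive CLI route more practical.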