So no doubt everyone here has heard about the growth of AI over the past 18 months or so, unless you have been living under a rock, disconnected from society and the internet.
Well, to assist with my work I used to have monthly subscriptions to Claude.ai, and also Midjourney for artwork and Suno for music. One of my new year's resolutions is to cull needless subscriptions, so the AI ones, which mount up when added together, were and are prime targets.
Now, to run AI models, also known as LLMs (Large Language Models), you do need a fairly decent machine, and lots of fast memory is very desirable if you want them to be responsive.
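To give a feel for why the memory matters, here is a rough rule of thumb, sketched in Python: weight memory is roughly the parameter count times the bytes per parameter at a given quantisation (the model sizes below are just illustrative examples, and this ignores KV-cache and runtime overhead).

```python
def model_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB for a quantised model:
    parameters x bits-per-parameter / 8, ignoring runtime overhead."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# A 7B model at 4-bit quantisation needs roughly 3.5 GB just for weights
print(round(model_memory_gb(7, 4), 1))   # 3.5
# A 30B model at 4-bit needs roughly 15 GB, which is why unified
# memory on Apple Silicon matters so much for the bigger models
print(round(model_memory_gb(30, 4), 1))  # 15.0
```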
Of course, running an LLM locally on your own machine is not going to be as fast as the likes of ChatGPT et al., which have vast cloud resources at their disposal, but you can get some pretty favourable results.
So what have I done?
I have set up Ollama on my M2 Pro MacBook Pro and created a web GUI that lets me choose which LLM to run, so I get the full ChatGPT effect. My plan is to upgrade my machine at some stage this year, which will allow me to run 30-billion-parameter models.
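For anyone curious what that setup looks like, here is a minimal sketch of the kind of Ollama commands involved (the model name is just an example, not necessarily what I run):

```shell
# Pull a model from the Ollama library
ollama pull llama3

# Chat with it interactively in the terminal
ollama run llama3

# List the models installed locally
ollama list

# Ollama also serves a local HTTP API (default port 11434),
# which is what a custom web GUI can talk to, e.g.:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Why is the sky blue?"}'
```

The local API is what makes a home-grown web front end practical: the GUI just posts prompts to localhost and streams the replies back.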
For a lowdown on what this means, let me let HAL 9000 (the name I have given my own AI chatbot) explain:
So this is my AI chatbot, with a web interface I am continuing to work on. To replace my subscription to Midjourney, I have set up and configured Stable Diffusion, which is in a different class altogether.
Below is the very first image I created with it
This was meant to be an Ancient Egyptian. LOL
But I have finally come up with some superb images. I borrowed the prompts for Gandalf and Darth Vader, so I cannot take sole credit for those, but these were all generated on my Mac. The power of Apple Silicon is there for all to see.