AI

I haven’t checked it out properly yet either, but some people are saying it’s good for coding and all sorts of web tasks. It would also be nice if they let you create your own agents, each with specific instructions and uploaded .txt files.

I'll try installing Open WebUI as he shows in the video; maybe it allows you to create a custom interface with some nice features. And then I'll add the DeepSeek-R1 and Claude models to it via Ollama.
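For anyone trying the same route, here's a minimal sketch of talking to a model through Ollama's local REST API (assumptions: Ollama is running on its default port 11434, and a model such as `deepseek-r1:7b` has already been pulled; the model name is just an example):

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Live usage (needs a running Ollama server):
#   print(ask("deepseek-r1:7b", "Summarise this thread in one line."))
```

Open WebUI talks to the same endpoint under the hood, so this is handy for checking the server is up before wiring in a front end.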


Let me know how you get on, seriously. My holy grail, as it were, is to be able to run as large a model as feasible, with a good response rate and output that is more or less technically on point. Meanwhile, I am still tinkering around with my web UI.

Certainly eye-opening as well as great fun, with the end result hopefully being well worth the time and effort.
 
 
I don't understand the 'knocking', Chopley. No one is forcing you to use AI or embrace it. This thread was based on my own genuine interest: how I am leveraging AI locally, on my own hardware, to assist me in my day-to-day work.

As well as being fascinated by the technology, I get very real benefits from it. The purpose of this thread is meant to be a positive one.

I get it, you don't like AI. Quite simple then: don't bother using it and move on; no one is forcing you to embrace it. :-)
 
It's part of the conversation though. MS are already reporting that their enterprise customers just aren't buying into AI once there's a cost associated with it, and real-world enterprise examples of it doing useful stuff are hard to find. Google are ramping up subscription costs and bundling AI with all sorts of their products to make its 'uptake' look better.

As previously discussed, what you're doing with it is highly technical, bespoke, and running entirely locally. If everyone did that, its commercial prospects would be... limited :)

It's not AI anyway, it's machine learning, which is an entirely different thing, and the bullshit branding exercise the tech industry is trying to pull irks me.

I'm interested in AI too, but more from the angle of seeing what nonsense the tech industry will come up with next to foist it upon a world that's largely indifferent and most certainly isn't interested in paying for it.
 


In short - a local one isn't for me. Essentially, I need a few smart agents for specific tasks, like in Claude Projects, where you can give them roles, tasks, and guidelines on what to do and how to do it, plus attach .txt files with more specifications. So when I want to do something specific, I just pick the right agent to help out.

For example, like the couple I have below, each with a short description so I know what they're for. Inside, they have detailed instructions plus .txt files.

[screenshot: 1.webp]
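That agent pattern can be approximated locally without Claude Projects. Here's a hypothetical sketch (the `Agent` class and file names are my own illustration, not any product's API): a role prompt plus attached .txt specification files, combined into one system prompt for whatever model you run.

```python
from pathlib import Path

class Agent:
    """Hypothetical sketch of a Claude-Projects-style 'agent':
    a named role prompt plus extra .txt specification files."""

    def __init__(self, name: str, instructions: str, spec_files=()):
        self.name = name
        self.instructions = instructions
        # Read each attached spec file once, up front.
        self.specs = [Path(p).read_text() for p in spec_files]

    def system_prompt(self) -> str:
        # The model sees the role instructions first, then each attached spec.
        return "\n\n".join([self.instructions] + self.specs)

# Usage (file name is illustrative):
#   editor = Agent("Editor", "You proofread UK-English copy.", ["style.txt"])
#   send_to_model(editor.system_prompt(), user_message)
```

Picking "the right agent" then just means picking which object's `system_prompt()` you send along with your message.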


So, to try out the local setup, yesterday I installed Ollama, which took about 5GB of space on its own. Then a custom interface like Open WebUI, plus an AI model like DeepSeek-R1 for the main tasks, needed at least another 1GB:

[screenshot: deepseek.webp]


And to connect all this together, I had to install Docker, which again was about 5GB.

But here is what they say:

Disk Space:
Practical Minimum: About 50GB should suffice, primarily to accommodate the Docker container size (around 2GB+ for ollama-webui) and model files, without needing a large buffer beyond the essentials.
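If you want to see where that disk space actually goes, here's a small sketch that sums the sizes Ollama reports for the models you've pulled (assumption: a local Ollama instance exposing its standard `/api/tags` endpoint; the helper names are mine):

```python
import json
import urllib.request

# Ollama's local endpoint listing pulled models and their on-disk sizes.
TAGS_URL = "http://localhost:11434/api/tags"

def total_model_bytes(tags_response: dict) -> int:
    """Sum the on-disk size of every model in an /api/tags response."""
    return sum(m.get("size", 0) for m in tags_response.get("models", []))

def gigabytes(n_bytes: int) -> float:
    """Convert bytes to GiB for readability."""
    return n_bytes / 1024 ** 3

# Live usage (needs Ollama running):
#   with urllib.request.urlopen(TAGS_URL) as r:
#       print(f"{gigabytes(total_model_bytes(json.load(r))):.1f} GB of models")
```

Model files dominate the total, which is why the docs' 50GB "practical minimum" leaves room for only a handful of mid-sized models.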


Then I noticed many people don't want Docker, and later I found a quicker way to set it all up using Pinokio (pinokio.computer). But after researching on Reddit and a few other places, it didn't seem trustworthy to me, and its developer doesn't seem very active on GitHub. Here is how it works, by the way:



The main question was: why would I need a locally running LLM like DeepSeek if I can just access it online with a click, and it's free to use? So I just left it all alone for now.
 
Yes, I hear you. The web GUI I have set up myself, admittedly with some help from GPT and Claude, but the main thing is that it is doing what I want it to do. Running these locally is imperative for me; I do not want to be worrying about code or credentials being uploaded to the cloud.

I am currently speccing out the machine I will need, and whilst GPU cores are important, fast memory, and lots of it, is the kicker.

So currently I am looking at either an M4 Pro Mac Mini with 64GB of unified memory or an M4 Pro MacBook Pro with 48GB of memory. Either machine should allow me to run a 30-billion-parameter LLM with a decent response time.

But regardless, it is fun experimenting, and it will be a while yet before I am in a position to get a new machine.
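As a sanity check on those memory figures, here's a rough back-of-envelope sketch (the 20% overhead factor is my assumption; real usage varies with quantisation level, context length, and runtime):

```python
def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough estimate of RAM needed to hold a model's weights.

    overhead (~20%) is an assumed allowance for KV cache, activations,
    and runtime buffers; actual usage depends on context length.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 1e9

# A 30B model: ~36 GB at 8-bit, ~18 GB at 4-bit quantisation,
# so both the 48 GB and 64 GB configurations have headroom at 4-bit.
```

That's why a 4-bit quantised 30B model is comfortable on either machine, while an 8-bit one would be tight on the 48GB MacBook Pro.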
 

If you come up with the right solution, you should also be able to make it search the internet the way you want. Using the Google Search API or the Brave Search API would do the job. I checked them both yesterday, and Brave gives plenty of free credits compared to Google.
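For what it's worth, here's a hedged sketch of calling Brave's web search endpoint from Python (assumptions: the v1 endpoint and `X-Subscription-Token` header as currently documented by Brave; you'd need your own API key):

```python
import urllib.parse
import urllib.request

# Brave's documented web search endpoint (v1).
BRAVE_ENDPOINT = "https://api.search.brave.com/res/v1/web/search"

def build_search_url(query: str, count: int = 5) -> str:
    """Build the request URL; count is the number of results wanted."""
    params = urllib.parse.urlencode({"q": query, "count": count})
    return f"{BRAVE_ENDPOINT}?{params}"

def search(query: str, api_key: str) -> bytes:
    """Run the search; the API key goes in the X-Subscription-Token header."""
    req = urllib.request.Request(
        build_search_url(query),
        headers={"Accept": "application/json",
                 "X-Subscription-Token": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Feed the returned JSON snippets into the local model's prompt and you have a basic search-augmented setup.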
 
The thing is that DeepSeek may soon become paid, just like ChatGPT: initially it was pretty decent and free, but later they made the basic model dumb so people would switch to the better model for £20 or so per month, and everyone got mad about it.

But you never know what China has in mind; their technology is like something from 2050. Imagine if they came up with something like: do you want to switch from Starlink? The sub will cost you £10 a month.

See how DeepSeek presents itself:

[screenshot: deepseek-r1.webp]


It's like if Tesco said: we're open right now and rivalling Waitrose :D
 
AI is a scam, a grift. Industrial scale theft and larceny. It knows nothing, it thinks nothing, it regurgitates stolen material, hallucinates nonsense and presents untruths with a confidence that would be amusing if it weren't so predicated on stealing the sum total of human endeavours for the enrichment of billionaires.

The only good news is that sooner or later this horseshit Ponzi scheme will collapse in on itself.

I'm a big fan of this guy and his work.



We occasionally dabble with AI in my line of work (it is artificial, but it's definitely not intelligent). We've seen it make every mistake in the book, including making up entirely new cmdlets (stuff that literally doesn't exist in code), and then, when we correct it, it fesses up and admits it hallucinated them.

We use AI as the punchline to jokes, like, 'Have you asked ChatGPT about it... LOL'.

It really depends on the quality of the AI system and the field it's used in. Chess engines are (and will continue to be) superior to the best GMs; no one expected this 25 years ago. AI is also far better than humans at fraud detection, for instance, so there are exceptions to AI being of no use. Granted, even very advanced AI systems, although a lot faster, still make errors when it comes to translations and legal citations. I do think that major advancements will be made in the coming decades.
 
Word on the internet is that France noticed the success of China's government-backed DeepSeek and decided to make one as well, called Lucie. But Lucie lasted only a couple of days and got taken down :D 🤖

'Cow's eggs, also known as chicken's eggs, are edible eggs produced by cows. Cow's eggs are a source of protein and nutrients, and are considered to be a healthy and nutritious food.'

Asked to multiply 5 by (3+2), the model gave an answer of 17 instead of 25, and Lucie also said that “the square root of a goat is one.”


 

They've open-sourced the whole thing, so you can run it locally; the smaller distilled models will run on a single GPU.

It's a pretty awesome power dick swinging move by the Chinese. 'That's a nice multi-hundred-billion-dollar AI scam grift Ponzi scheme "industry" you've got there; it would be a shame if someone were to release a free, open-source version that does exactly the same thing and runs on relatively trivial hardware.'

There's a really good explainer of what they've done here:

 
OpenAI moaning that Deepseek has stolen its stuff is hilarious.

Tell me again where OpenAI got all its 'training data' from, Mr Altman?

I would say it takes info from anywhere it has access to, essentially any web source that doesn't block it. They have GPTBot, which is similar to Google-Extended, and they use it to gather training data for their AI models.
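On the blocking side, a site can opt out of both crawlers in its robots.txt; the user-agent tokens below are the ones OpenAI and Google document, though compliance is voluntary on the crawler's side:

```
# robots.txt — opt out of AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note that Google-Extended only controls AI training use; regular Googlebot indexing for search is unaffected.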

There is the case of an engineer, Suchir Balaji, who worked for OpenAI and committed suicide (according to the police). A forensic report is on the way and is expected to be released by Feb 24, 2025. He had been saying that products like ChatGPT violate United States copyright law.

[photo: suchir-balaji-tucker-youtube.webp]


 
