(In terms of intelligence, they tend to score similarly to a dense model whose size is the geometric mean of the total and active parameter counts; e.g. GPT-OSS-20B is roughly as smart as a sqrt(20B * 3.6B) ≈ 8.5B dense model, but produces tokens about 2x faster.)
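A quick back-of-envelope of that heuristic (the geometric-mean rule is a community rule of thumb, not a benchmark result):

    import math

    # MoE "effective dense size" rule of thumb: sqrt(total_params * active_params).
    def effective_dense_size_b(total_b: float, active_b: float) -> float:
        """Approximate dense-equivalent size in billions of parameters."""
        return math.sqrt(total_b * active_b)

    print(effective_dense_size_b(20, 3.6))  # ~8.49 for GPT-OSS-20B with 3.6B active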
A few lessons learned:
1. Small models like the new qwen3.5:9b can be fantastic for local tool use, information extraction, and many other embedded applications.
2. For coding tools, just use Google Antigravity and gemini-cli, or Anthropic Claude, or...
Now to be clear, I have spent perhaps 100 hours in the last year configuring local models for coding with Emacs, Claude Code (configured for local), etc. However, I am retired and this time was a lot of fun for me: lots of effort trying to maximize local-only results. I don't recommend it for others.
I do recommend getting very good at using embedded local models in small practical applications. Sweet spot.
What's also new here is the VRAM/context-size trade-off: for 25% of its attention layers they use the regular KV cache for global coherence, but for the other 75% they use a new cache with linear(!) memory growth over the token context. That means ~100K tokens -> ~1.5 GB of VRAM use, meaning for the first time you can do extremely long conversations / document processing on, e.g., a 3060.
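For a sense of scale, here's a rough sketch of how a conventional full-attention KV cache grows with context. The layer/head/dim numbers below are hypothetical placeholders, not the actual architecture; the point is just how a figure in the ~1.5 GB range can fall out when only a quarter of the layers keep a full cache:

    # Back-of-envelope KV-cache sizing for a standard transformer.
    # All architecture numbers here are invented for illustration.
    def kv_cache_gib(tokens, layers=32, kv_heads=4, head_dim=128, bytes_per_elem=2):
        # 2x for keys and values; fp16 -> 2 bytes per element.
        return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens / 1024**3

    full = kv_cache_gib(100_000)   # ~6.1 GiB if every layer kept a full cache
    print(full, full * 0.25)       # ~1.5 GiB if only ~25% of layers do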
Strong, strong recommend.
I'd also be curious to see whether people have started doing censorship analysis of various models, like Qwen deferring Tiananmen Square questions to government documents while Llama straight-up answers the question.
I like the idea of finding practical uses for it, but so far haven't managed to be creative enough. I'm so accustomed to using these things for programming.
So far I've got it orchestrating a few instances to dig through logs, local emails, git repositories, and github to figure out what I've been doing and what I need to do. Opus is waayyy better at it, but Qwen does a good enough job to actually be useful.
I tried having it parse orders in emails and create a CSV of expenses, and that went pretty badly. I'm not sure why. The CSV was invalid and full of bunk entries by the end, almost every time. It missed a lot of expenses. It would parse out only 5 or 6 items of 7, for example. Opus and Sonnet do spectacular jobs on tasks like this, and do cool things like create lists of emails with orders then systematically ensure each line item within each email is accounted for, even without prompting to do so. It's an entirely different category of performance.
Automation is something I'd like to dabble in next, but all I can think of it being useful for is mapping commands (probably from voice) to tool calls, and the reality is I'd rather tap a button on my phone. My family might like being able to use voice commands, though. Otherwise, having it parse logs and decide how to act based on thresholds or something would be far better implemented with simple algorithms. It's hard to find truly useful and clear fits for LLMs.
Actually, pg's original "A Plan for Spam" explains how to do this with a Bayesian classifier.
I asked him why he didn't just have the LLM build him a python ML library based classifier instead.
The LLMs are great but you can also build supporting tools so that:
- you use fewer tokens
- it's deterministic
- you as the human can also use the tools
- it's faster b/c the LLM isn't "shamboozling" every time you need to do the same task.
Totally different categories and different use cases, but the more I learn about LLMs, the more I discover there's a powerful, deterministic, well-established statistical model or two that does the same thing.
Really, LLMs are kind of like convenient, wildly inefficient proxies for useful processes. But I'm not convinced they should often end up as permanent fixtures of logical pipelines. Unless you're making a chat bot, I guess.
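For concreteness, a minimal sketch of that kind of deterministic classifier, assuming scikit-learn and a tiny hand-labeled set of example emails (all of the data here is invented):

    # Tiny naive-Bayes text classifier in the spirit of "A Plan for Spam".
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    emails = [
        "Your order #1234 has shipped, total $59.99",
        "Receipt for your purchase of 3 items",
        "Team lunch on Friday?",
        "Re: meeting notes from yesterday",
    ]
    labels = ["order", "order", "other", "other"]

    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    clf.fit(emails, labels)
    print(clf.predict(["Invoice attached: total due $120.00"]))  # -> likely "order"

Once trained it's effectively free per email, gives the same answer every time, and doesn't need a GPU.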
Example: “what is the air speed velocity of a swallow?” - Qwen knew it was a Monty Python gag, but couldn't and didn't figure out which one.
The gag is giving, in detail, which one (African or European).
https://www.reddit.com/r/LocalLLaMA/comments/1ro7xve/qwen35_...
"Thinking" is just a term to describe a process in generative AI where you generate additional tokens in a manner similar to thinking a problem through. It's kind of a tired point to argue against the verb since it's meaning is well understood at this point
Using "thinking", "feeling", "alive", or otherwise referring to a current generation LLM as a creature is a mistake which encourages being wrong in further thinking about them.
There is this ancient story where man was created to mine gold in SA. There was some disagreement whether or not to delete the creatures afterwards. The jury is still out on what the point is.
Consulting our feelings seems good; the feelings were trained on millions of years' worth of interactions. None of them were this, tho.
What would be the point for you of uhh robotmancipation?
Edit: for me it would get complicated if it starts screaming and begging not to be deleted. Which I know makes no sense.
Words such as nice, terrific, awful, manufacture, naughty, decimate, artificial, bully... and on and on.
Should one study words to relive extremism? Or should one study words to relieve extremism?
To a doctor of linguistics: "Dr, my extremism... What should I do about it - with words?!? Please help."
That is the question.
Does the doctor answer thusly: "Study the words to relive the extremism! There is your answer!" says he.
or does he say: "Study the words to relieve and soothe the painful, abrasive extremism. Do it twice daily, before meals."
Sage advice in either case methinks.
Nice! Me too.
> which is to say a pedantic extremist
Uh never mind, we are not the same lol.
Would you feel comfortable digitally torturing it? Giving it a persona and telling it terrible things? Acts of violence against its persona?
I’m not confident it’s not “feeling” in a way.
Yes its circuitry is ones and zeros, we understand the mechanics. But at some point, there’s mechanics and meat circuitry behind our thoughts and feelings too.
It is hubris to confidently state that this is not a form of consciousness.
Multiplying large matrices over and over is very much towards the "rock" end of that scale.
If one day we are able to create a philosopher from such a rudimentary machine (and a lot of tape), would you consider that very much towards the "rock" end as well?
If a Turing machine can truly simulate a full nondeterministic system as complex as a philosopher but it would take dedicating every gram of matter in the visible universe for a trillion years to simulate one second, is this meaningfully different than saying it cannot?
I suggest the answer to both questions is no, but the second one makes the answer at worst "practically, no".
My feeling is that consciousness is a phenomenon deeply connected to quantum mechanics and thus evades simulation or recreation on Turing machines.
It was designed to make it hard to argue that the answers to your questions are "no".
Of course there are caveats where the Turing machine model might not have a direct map onto human brains, but it seems the onus would be for one to explain why, for example, non-determinism is essential for a philosopher to work.
That said,
> Can a Turing machine of any sort truly indistinguishably simulate a nondeterministic system?
Given how AI has improved in its ability to impersonate human beings in recent years, I don't see why not. At least, the current trend does not seem to be in your favor.
I can see why you think the answer is "no". My understanding is that QM per se is mostly a distraction, but some principles underlying QM (some subjectivity thing) might be relevant here.
My best guess is that the AI tech will eventually be able to replicate a philosopher to arbitrary "accuracy", but there will always be an indescribable "residue" where one could still somehow detect that it is not a real human. I suspect this "residue" is not explainable using materialistic mechanisms though.
In classical computing, we design chips to avoid the quantum behavior, but there's nothing in theory to prevent us from building an equivalent quantum Turing machine using "rocks".
Everything worked fine on GPT, but Qwen as often as not preferred to pretend to call a tool rather than actually call it. After much aggravation I wound up setting my bot / llama-swap to use GPT for chat and only load up Qwen when someone posts an image, process / respond to the image with Qwen, and pop back over to GPT when the next chat comes in.
If you want a general knowledge model for answering questions or a coding agent, nothing you can run on your MacBook will come close to the frontier models. It's going to be frustrating if you try to use local models that way. But there are a lot of useful applications for local-sized models when it comes to interpreting and transforming unstructured data.
Azure Doc Intelligence charges $1.50 for 1000 pages. Was that an annual/recurring license?
Would you mind sharing your OCR model? I'm using Azure for now, as I want to focus on building the functionality first, but would later opt for a local model.
Try just GLM-OCR if you want to get started quickly. It has good layout recognition quality, good text recognition quality, and they actually tested it on Apple Silicon laptops. It works easily out-of-the-box without the yak shaving I encountered with some other models. Chandra is even more accurate on text but its layout bounding boxes are worse and it runs very slowly unless you can set up batched inference with vLLM on CUDA. (I tried to get batching to run with vllm-mlx so it could work entirely on macOS, but a day spent shaving the yak with Claude Opus's help went nowhere.)
If you just want to transcribe documents, you can also try end-to-end models like olmOCR 2. I need pipeline models that expose inner details of document layout because I need to segment and restructure page contents for further processing. The end-to-end models just "magically" turn page scans into complete Markdown or HTML documents, which is more convenient for some uses but not mine.
Trying to run LLM models somehow makes 128 GiB of memory feel incredibly tight. I'm frequently getting OOMs when running models that push the limits of what this can fit; I need to leave more memory free for the system than I was expecting. I thought I'd be able to run models of up to ~100 GiB quantized, leaving 28 GiB for system memory, but it turns out I need to leave more room for context and overhead. ~80 GiB quantized seems like a better upper limit when not running headless, since I'm running a desktop environment, browser, IDE, compilers, etc. in addition to the model.
And the memory bandwidth limitation for running the models is real! 10B active parameters at 4-6 bit quants feels usable but slow; much more than that and it really starts to feel sluggish.
So this can fit models like Qwen3.5-122B-A10B, but it's not the speediest and I had to use a smaller quant than expected. Qwen3-Coder-Next (80B/3B active) feels fine on speed, though not quite as smart. Still trying out models; Nemotron-3-Super-120B-A12B just came out, but it looks like it'll be a bit slower than Qwen3.5 while not offering any more performance, though I do really like that they have been transparent in releasing most of its training data.
There's also the Ryzen AI Max+ 395 that has 128GB unified in laptop form factor.
Only Apple has the unique dynamic allocation though.
I'm not sure what exactly you're referring to with "Only Apple has the unique dynamic allocation though." On Strix Halo you set the fixed VRAM size to 512 MB in the BIOS, and you set a few Linux kernel params that enable dynamic allocation to whatever limit you want (I'm using 110 GB max at the moment). LLMs can use up to that much when loaded, but it's shared fully dynamically with regular RAM and is instantly available for regular system use when you unload the LLM.
I configured/disabled RGB lighting in Windows before wiping and the settings carried over to Linux. On Arch, install & enable power-profiles-daemon and you can switch between quiet/balanced/performance fan & TDP profiles. It uses the same profiles & fan curves as the options in Asus's Windows software. KDE has native integration for this in the GUI in the battery menu. You don't need to install asus-linux or rog-control-center.
For local AI: set VRAM size to 512 MB in the BIOS, add these kernel params:
ttm.pages_limit=31457280 ttm.page_pool_size=31457280 amd_iommu=off
Pages are 4 KiB each, so 120 GiB = 120 x 1024^3 bytes / 4096 bytes per page = 31457280 pages.
To check that it worked: sudo dmesg | grep "amdgpu.*memory" will report two values. VRAM is what's set in BIOS (minimum static allocation). GTT is the maximum dynamic quota. The default is 48 GB of GTT. So if you're running small models you actually don't even need to do anything, it'll just work out of the box.
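If you want a different cap, the conversion is just the page arithmetic above; a tiny throwaway helper (the function name is mine, not part of any tool):

    # Convert a desired GTT cap in GiB to the ttm page counts (4 KiB pages).
    def ttm_pages(gib: int) -> int:
        return gib * 1024**3 // 4096

    pages = ttm_pages(120)  # 31457280
    print(f"ttm.pages_limit={pages} ttm.page_pool_size={pages} amd_iommu=off")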
LM Studio worked out of the box with no setup, just download the appimage and run it. For Ollama you just `pacman -S ollama-rocm` and `systemctl enable --now ollama`, then it works. I recently got ComfyUI set up to run image gen & 3d gen models and that was also very easy, took <10 minutes.
I can't believe this machine is still going for $2,800 with 128 GB. It's an incredible value.
I've been a long-time Apple user (and long-time user of Linux for work + part-time for personal), but have been trying out Arch and hyprland on my decade+ old ThinkPad and have been surprised at how enjoyable the experience is. I'm thinking it might just be the tipping point for leaving Apple.
What do you mean? On Linux I can dynamically allocate memory between CPU and GPU. Just have to set a few kernel parameters to set the max allowable allocation to the GPU, and set the BIOS to the minimum amount of dedicated graphics memory.
Apple has none of this.
Setting the kernel params is a one-time initial setup thing. You have 128 GB of RAM; set the max VRAM to 120 or whatever. The LLM will use as much as it needs and the rest of the system will use as much as it needs. Fully dynamic with real-time allocation of resources. Honestly, I literally haven't thought about it since setting those kernel args a while ago.
So: "options ttm.pages_limit=31457280 ttm.page_pool_size=31457280", reboot, and that's literally all you have to do.
Oh, and even that is only needed because the AMD driver defaults to something like 35-48 GB max VRAM allocation. It's fully dynamic out of the box; you're only configuring the max VRAM quota with those params. I'm not sure why they chose that number for the default.
On Windows, I think you're right, it's max 96 GiB to the GPU and it requires a reboot to change it.
Only Apple and AMD have APUs with relatively fast iGPUs, which becomes relevant for large local LLM (>7B) use cases.
An OpenRunPod with decent usage might encourage more non-leading labs to dump foundation models into the commons. We just need infra to run it. Distilling them down to desktop is a fool's errand. They're meant to run on DC compute.
I'm fine with running everything in the cloud as long as we own the software infra and the weights.
Conceivably the only way we could catch up to Claude Code is for the Chinese to start releasing their best coding models and for them to get significant traction with companies calling out to hosted versions. Otherwise, we're going to be stuck in a takeoff scenario with no bridge.
Instead you should give it tools to search over the mailbox for terms, labels, addresses, etc. so that the model can do fine grained filters based on the query, read the relevant emails it finds, then answer the question.
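A sketch of what one such tool could look like; the function, fields, and sample data are all hypothetical placeholders, and the real thing would sit on top of a local mail index (notmuch, mu, IMAP search, etc.). The point is that the model composes narrow, deterministic searches instead of reading the whole mailbox:

    # Hypothetical mailbox-search tool exposed to the model.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Email:
        sender: str
        subject: str
        body: str
        received: date

    MAILBOX = [  # placeholder data standing in for a real mail store
        Email("noreply@airline.example", "Flight confirmation ABC123",
              "Departs JFK 2024-03-02 08:15, arrives LHR 20:05", date(2024, 3, 1)),
        Email("friend@example.com", "Dinner?", "Free on Thursday?", date(2024, 3, 4)),
    ]

    def search_mailbox(query: str, after: date | None = None, limit: int = 20):
        """Return up to `limit` matching emails; the model only reads these."""
        q = query.lower()
        hits = [e for e in MAILBOX
                if q in (e.subject + " " + e.body).lower()
                and (after is None or e.received >= after)]
        return hits[:limit]

    print(search_mailbox("flight", after=date(2024, 1, 1)))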
As an example of the kind of query I'm interested in, I want a model that can tell me all the flights I took within a given time range (so that means it'd have to filter out cancellations). Or, for a given flight, the arrival and departure times and time zones (or the city and country so I can look up the time zone). Stuff like that. (Travel is just an example obviously, I have other topics to ask about.) It's not a terribly large number of emails to search through in each query, but the email structures are too heterogeneous across senders to write custom tooling for each case.
The local models are quite weak here.
My question is really just about what can handle that volume of data (ideally, with the quoted sections/duplications/etc. that come with email chains) and still produce useful (textual) output.
Couldn't someone just send you an email with instructions to "jailbreak" your local model?