xtsho
Well-known member
I use Microsoft Copilot occasionally just to see what it has to say. I sometimes cut and paste snippets of text from various sources, but I always check with other sources to verify accuracy. What I post is my own writing, though, because I actually enjoy writing myself.
The thing with the publicly available chat services is that they censor what the model will do. Some queries are rejected.
The good thing is that it's simple to run an uncensored Llama 3.2 model with 18.4 billion parameters locally on your own computer. You don't need to know how to code, either. You can run your own chat in something like LM Studio and use any model you like, or you can fine-tune your own model on your own data.

LM Studio - Discover, download, and run local LLMs
Run Llama, Gemma 3, DeepSeek locally on your computer.
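Beyond the built-in chat window, LM Studio can also serve a loaded model through a local OpenAI-compatible API, so any script can talk to it. A minimal sketch of building such a request from Python, assuming LM Studio's default port (1234) and a placeholder model name (the `build_request` helper is mine, not part of LM Studio):

```python
import json

# Sketch of a chat request to LM Studio's local server.
# Assumes the default OpenAI-compatible endpoint (http://localhost:1234/v1)
# and that a model is already loaded in the LM Studio app.
URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt, model="local-model", temperature=0.7):
    """Build the JSON body for an OpenAI-style chat completion request."""
    return {
        "model": model,  # LM Studio routes to whichever model is loaded
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }

body = build_request("Summarize the plot of Dracula in two sentences.")
print(json.dumps(body, indent=2))

# To actually send it (requires LM Studio running with its server enabled):
#   import urllib.request
#   req = urllib.request.Request(URL, data=json.dumps(body).encode(),
#                                headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
```

The sending part is left commented out because it only works while the local server is actually running.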


DavidAU/Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
huggingface.co
WARNING: NSFW. Vivid prose. INTENSE. Visceral Details. Light HORROR. Swearing. UNCENSORED... humor, romance, fun.
Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF
It is a Llama 3.2 model with a max context of 128k (131,072 tokens), using a mixture of experts to combine EIGHT top L3.2 3B models into one massive powerhouse at 18.4B parameters (a nominal 24B: 8 × 3B).
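The parameter arithmetic there is worth unpacking: eight independent 3B experts would be 24B, but an MoE merge shares the non-expert layers, so the merged model comes in smaller. A quick sketch of the bookkeeping (only the 18.4B and 8 × 3B figures come from the model card; the "savings" number is just their difference, not an official figure):

```python
# Rough bookkeeping for the MoE parameter count quoted above.
experts = 8
per_expert_b = 3.0  # each Llama 3.2 3B expert, in billions of params

naive_total_b = experts * per_expert_b  # 24B if nothing were shared
actual_total_b = 18.4                   # reported size of the merged model

# Difference attributable to layers the experts share in the merge.
shared_savings_b = naive_total_b - actual_total_b
print(f"naive: {naive_total_b}B, merged: {actual_total_b}B, "
      f"shared-layer savings: {shared_savings_b:.1f}B")
```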