hey guys,
today I'll guide you through setting up an uncensored AI quickly and cheaply.
requirements:
OPSEC (of course)
a VCC that works with Stripe.
First, go to https://www.runpod.io/ and set up billing etc.
After that, rent a GPU server here:
https://www.runpod.io/console/gpu-secure-cloud
In my experience, this one worked best:
https://imgur.com/RYCi39h
You can pick any other with similar specs, but please not worse. In my experience this was good enough to run a very capable LM (language model).
After choosing the GPU server, you are asked whether you want to install a template.
Choose "TheBloke Local LLM One-Click UI". This is great because it lets you install and manage your language model very easily.
The install takes 1-2 minutes, and then you can already start your web UI like this:
https://imgur.com/a/B2NXsxo
Now head over to the Model tab, go to "Download custom model or LoRA", and enter this:
TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ
original model link:
https://huggingface.co/TheBloke/Wizard-V...sored-GPTQ
Now the model will download. This can take a few minutes, but usually no more than 5.
After that, click the reload button, select the model, apply the following settings, and load:
https://imgur.com/a/VSvCkqe
Now we're done!
Now it's up to you what you do with the LM. You can either start a chat or go to the Default tab and write an instruction. No tracking and no dependency on OpenAI.
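If you use the Default tab with raw instructions, the model expects a Vicuna-style prompt template. Here is a minimal sketch; the exact template string is an assumption on my side, so verify it against the model card before relying on it:

```python
def build_prompt(instruction: str) -> str:
    # Vicuna-style turn format (assumed from the Wizard-Vicuna family;
    # check the model card for the exact template).
    return f"USER: {instruction}\nASSISTANT:"

print(build_prompt("Explain GPTQ quantization in one sentence."))
```

Getting the template right matters a lot for instruction quality; a mismatched template usually produces rambling or ignored instructions.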
Now, if you're a bit techy like me, you can also connect to the LM through an API. Here is a Python code example which I adjusted a little for RunPod. For the URL, copy the RunPod port-5000 URL:
https://paste.fo/44c7b160cd37
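In case the paste goes down, here is a minimal sketch of what such a call can look like against the web UI's HTTP API. The `/api/v1/generate` endpoint, the payload field names, and the pod URL are assumptions based on the one-click UI; adjust them to whatever your pod actually exposes:

```python
import json
import urllib.request

# Placeholder: replace with your RunPod port-5000 URL.
BASE_URL = "https://YOUR-POD-ID-5000.proxy.runpod.net"

def build_payload(prompt: str, max_new_tokens: int = 200) -> dict:
    # Field names assumed from the text-generation-webui API; adjust if needed.
    return {
        "prompt": prompt,
        "max_new_tokens": max_new_tokens,
        "temperature": 0.7,
    }

def generate(prompt: str) -> str:
    # POST the payload as JSON and return the generated text.
    req = urllib.request.Request(
        BASE_URL + "/api/v1/generate",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

Call it like `generate(build_prompt_text)` once your pod is running; if the request hangs or 404s, double-check that the API is enabled on the pod and that you copied the right port URL.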
Some facts about WizardLM.
Here you can see a test which shows how good the response quality is:
https://imgur.com/BGAw1pL
As you can see, it's pretty close to GPT-4 by OpenAI, which is impressive for a free and open-source LM. AND IT'S UNCENSORED. So if you want to learn to cook crack (hypothetically), this is a good resource.
If you want a guide on how to set up an LM on your local machine, let me know! It's not as capable as this one for instructions, but it's fine for some generic work. If you have any questions, send me a PM, write a post under this thread, or hit me up on TG. I'll try to answer every question when I have time!