Those are OpenCL platform and device identifiers; you can use clinfo to find out which numbers are which on your system.
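For reference, clinfo -l prints a compact list. On a hypothetical machine it might look like this (your output will differ):

    $ clinfo -l
    Platform #0: NVIDIA CUDA
     +-- Device #0: NVIDIA GeForce RTX 3090
    Platform #1: Intel(R) OpenCL
     +-- Device #0: Intel(R) UHD Graphics 630

Here platform 0, device 0 would be the 3090, which IIRC you’d pass to kobold.cpp as --useclblast 0 0.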
Also note that if you’re building kobold.cpp yourself, you need to build with LLAMA_CLBLAST=1 for OpenCL support to exist in the first place, or LLAMA_CUBLAS for CUDA.
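Roughly, assuming the usual make build on Linux (check the repo README for your platform):

    make LLAMA_CLBLAST=1    # OpenCL via CLBlast
    make LLAMA_CUBLAS=1     # CUDA via cuBLAS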
What’s the problem you’re having with kobold? It doesn’t really require any setup: download the exe, click on it, select a model in the window, click Launch, and the web UI should open in your default browser.
Small update: take what I said about the breakage at 6000 tokens with a pinch of salt. Testing is complicated by something, somewhere, breaking in a way that persists across generations and even kobold.cpp restarts. It must be some driver issue with CUDA, because it takes a PC reboot to resolve, after which the exact same generation goes from gibberish to correct.
All… Subscribed and Local seem to be working correctly; some of the posts on them are new.
Top (day) is a good tip. Though I do think something is broken; it seems too unlikely that this particular batch of shitposts is so uniquely hot that it stays up all day, when before the feed was moving quite fast.
Reddit has over 2,000 employees, most of whom are doing bullshit that nobody using the site actually needs or wants; it’s possible to run a lot leaner than that. Like Reddit itself used to, before they started burning hundreds of millions trying to compete with every other social media site at once instead of just being Reddit.
You are supposed to manually set the RoPE scale to 1.0 and the base to 10000 when using Llama 2 with 4096 context; the automatic scaling assumes the model was trained for 2048. Though as I say in the OP, that still doesn’t work, at least with this particular fine-tune.
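For reference, in kobold.cpp I believe the relevant flag is --ropeconfig, which takes the scale and then the base; something like this (flag names from memory, double-check against --help, and the model filename is just a placeholder):

    python koboldcpp.py --model yourmodel.bin --contextsize 4096 --ropeconfig 1.0 10000

In plain llama.cpp the equivalents would be --rope-freq-scale 1.0 --rope-freq-base 10000.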