madlad400-3b-mt llama.cpp

I'm using fairydreaming/T5-branch; I'm not sure the current llama-cpp-python supports T5.

Model-Q8_0-GGUF, Reference1, Reference2
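
For reference, a minimal sketch of how translation might be driven from Python, assuming llama-cpp-python (built against the T5 branch) exposes the model through its usual `Llama` completion API — which, as noted above, may not be the case in the current release. The GGUF path is a placeholder for the Q8_0 model linked above; MADLAD-400 expects a `<2xx>` target-language token prepended to the source text.

```python
# Minimal sketch, assuming llama-cpp-python can load the T5-based MADLAD GGUF.
# The author notes current llama-cpp-python may not support T5, so this may
# require a build against the fairydreaming T5 branch of llama.cpp.
from llama_cpp import Llama

# Hypothetical local path to the Q8_0 GGUF referenced above.
llm = Llama(model_path="madlad400-3b-mt-q8_0.gguf", n_ctx=512)

# MADLAD-400 prompts prepend a target-language token, e.g. <2de> for German.
prompt = "<2de> How are you today?"
out = llm(prompt, max_tokens=128, temperature=0.1)
print(out["choices"][0]["text"])
```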

UI controls: a model selector ("Select the AI model to use for chat") and sliders for the generation parameters.