Update README.md

README.md CHANGED

```diff
@@ -131,7 +131,7 @@ Change `-t 10` to the number of physical CPU cores you have. For example if your
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
 
-Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters should be set by llama.cpp automatically.
+Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters should be set by llama.cpp automatically.
 
 If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
 
```
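Putting the flags described in the hunk together, an invocation might look like the sketch below. The model filename and prompt are placeholders, not part of the original README; adjust `-t`, `-ngl`, and `-c` to your hardware and model as described above.

```shell
# Hypothetical example: 8 physical cores, 32 GPU layers, 4096-token context.
./main -m model.gguf -t 8 -ngl 32 -c 4096 -p "Write a short poem about llamas."

# For a chat-style conversation, drop `-p <PROMPT>` in favour of
# interactive instruct mode:
./main -m model.gguf -t 8 -ngl 32 -c 4096 -i -ins
```

On a CPU-only build, omit `-ngl 32` entirely rather than setting it to 0.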