Branches: main, ollama-run-llama3.1-8b

  • 955 issues
  • 0 pull requests
  • 533 files
  • 1 active branch

CodeFactor Rating C+

Grade | Name                                           | Complexity | Churn | Issues
A     | ml\backend\ggml\ggml\src\ggml-cpu\amx\amx.cpp | 37         | 1     | -
A     | llama\runner\cache.go                          | 37         | 1     | 1
A     | progress\bar.go                                | 36         | 1     | -
A     | types\model\name_test.go                       | 35         | 1     | -
A     | llama\runner\image.go                          | 34         | 1     | 1
A     | llama\llama.cpp\examples\llava\llava.cpp       | 33         | 1     | 2
A     | app\lifecycle\server.go                        | 33         | 1     | 2
B     | openai\openai_test.go                          | 32         | 2     | 8
A     | model\mllama\imageproc.go                      | 32         | 1     | -
A     | server\manifest.go                             | 32         | 1     | 1
A     | discover\gpu_linux.go                          | 32         | 1     | 1
A     | convert\convert_llama.go                       | 31         | 1     | 3
A     | ml\backend\ggml\ggml\src\ggml-impl.h           | 31         | 1     | -
A     | model\mllama\imageproc_test.go                 | 31         | 1     | -
A     | convert\convert_llama_adapter.go               | 30         | 1     | 3
A     | convert\reader_safetensors.go                  | 30         | 1     | 1
A     | discover\gpu_info_oneapi.c                     | 30         | 1     | -
A     | discover\gpu_windows.go                        | 30         | 1     | 1
A     | discover\gpu_info_nvcuda.c                     | 27         | 1     | 1
A     | readline\history.go                            | 27         | 2     | -