Using llama64.lib with Harbour

PostPosted: Tue Mar 26, 2024 10:13 pm
by Antonio Linares
Thanks to Alexander Kressin:

git clone https://gitflic.ru/project/alkresin/llama_prg.git

to build it:

go.bat
@setlocal
rem Set up the 64-bit MSVC build environment
call "%ProgramFiles%\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" amd64
rem Compile the Harbour wrapper (needs the Harbour headers)
cl -c -TP -Ic:\harbour\include -Illama.cpp -Illama.cpp\common /EHsc source\hllama.cpp
rem Compile llama.cpp itself as C++
cl -c -TP -Illama.cpp -Illama.cpp\common /EHsc llama.cpp\llama.cpp
rem Compile the ggml backend and the common helpers
cl -c -Illama.cpp -Illama.cpp\common /EHsc llama.cpp\ggml.c llama.cpp\ggml-alloc.c llama.cpp\ggml-backend.c llama.cpp\ggml-quants.c llama.cpp\common\sampling.cpp llama.cpp\common\common.cpp llama.cpp\common\grammar-parser.cpp llama.cpp\common\build-info.cpp
rem Bundle all object files into a single static library
lib /OUT:llama64.lib hllama.obj llama.obj ggml.obj ggml-alloc.obj ggml-backend.obj ggml-quants.obj sampling.obj common.obj grammar-parser.obj build-info.obj
@endlocal

to build the examples:
@setlocal
call "%ProgramFiles%\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" amd64
c:\harbour\bin\win\msvc64\hbmk2.exe test1.prg -lllama64
@endlocal
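For orientation, a minimal Harbour program in the spirit of the bundled examples might look like the sketch below. The llm_* function names and parameters are illustrative assumptions only, not the confirmed API of llama_prg; check source\hllama.cpp in the repository for the wrapper functions it actually exports.

// Hypothetical sketch: the llm_* names below are assumptions for
// illustration, not the verified llama_prg API (see hllama.cpp).
PROCEDURE Main()

   LOCAL cToken

   // Load a local GGUF model (path is an example)
   IF llm_open_model( "models\model.gguf" ) < 0
      ? "Could not load model"
      RETURN
   ENDIF

   // Send a prompt and stream the reply token by token
   llm_ask( "What is Harbour?" )
   DO WHILE ! Empty( cToken := llm_get_next_token() )
      ?? cToken
   ENDDO

   llm_close_model()

   RETURN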

Re: Using llama64.lib with Harbour

PostPosted: Wed Mar 27, 2024 10:53 am
by Silvio.Falconi

Re: Using llama64.lib with Harbour

PostPosted: Wed Mar 27, 2024 11:04 am
by Antonio Linares

Re: Using llama64.lib with Harbour

PostPosted: Wed Mar 27, 2024 11:11 am
by Antonio Linares
We are checking it with Alexander, as it is running very slowly and we don't know why yet.

llama.cpp's own main.exe runs much faster.
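One possible culprit (an assumption, not something confirmed in this thread): go.bat compiles everything without optimization, while llama.cpp's own builds use full optimization plus CPU SIMD code generation, which makes an enormous difference for the ggml kernels. A hedged tweak to the ggml compile line in go.bat might be:

rem Assumption: enable optimization and AVX2 codegen; adjust /arch to your CPU
cl -c /O2 /fp:fast /arch:AVX2 /DNDEBUG -Illama.cpp -Illama.cpp\common /EHsc llama.cpp\ggml.c llama.cpp\ggml-alloc.c llama.cpp\ggml-backend.c llama.cpp\ggml-quants.c

Applying the same flags to the C++ compile lines would also be worth trying; without /arch, MSVC does not define __AVX2__, so ggml falls back to its scalar code paths.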

Re: Using llama64.lib with Harbour

PostPosted: Thu Mar 28, 2024 12:16 pm
by Otto
Some providers block .ru domains.