Using Microsoft AI Phi-2 from FWH
Re: Using Microsoft AI Phi-2 from FWH
hi Antonio,
thx for the link
have installed the runtime, rebooted the PC and put llama64.dll and phi-2_Q4_K_M.gguf into the same folder as the EXE
but still have no luck and get an error loading llama64.dll/phi-2_Q4_K_M.gguf ...
which FWH version is needed for that sample?
greeting,
Jimmy
- Antonio Linares
- Site Admin
- Posts: 42258
- Joined: Thu Oct 06, 2005 5:47 pm
- Location: Spain
- Contact:
Re: Using Microsoft AI Phi-2 from FWH
Dear Jimmy,
solved,
please download this llama64.dll:
https://github.com/FiveTechSoft/FWH_too ... lama64.dll
Re: Using Microsoft AI Phi-2 from FWH
Steps to build llama64.dll:
1. git clone https://github.com/ggerganov/llama.cpp
2. cd llama.cpp
3. mkdir temp
4. cd temp
5. cmake ..
6. Open the generated llama.cpp.sln in Visual Studio and select "Release" in the top toolbar.
7. In the project tree, right-click build_info, select Properties, and under C/C++, "Code Generation", "Runtime Library", select "Multi-threaded (/MT)".
8. Do the same for common: Properties, C/C++, "Code Generation", "Runtime Library", "Multi-threaded (/MT)".
9. Do the same for ggml.
10. Do the same for llama.
11. Do the same for simple.
12. In the project tree, open simple and edit simple.cpp.
13. Replace this code:

    int main(int argc, char ** argv) {
        gpt_params params;

        if (argc == 1 || argv[1][0] == '-') {
            printf("usage: %s MODEL_PATH [PROMPT]\n", argv[0]);
            return 1;
        }

        if (argc >= 2) {
            params.model = argv[1];
        }

        if (argc >= 3) {
            params.prompt = argv[2];
        }

        if (params.prompt.empty()) {
            params.prompt = "Hello my name is";
        }

        // total length of the sequence including the prompt
        const int n_len = 32;

with this one:

    typedef void (*PFUNC) (char* szToken);

    extern "C" __declspec(dllexport) int Llama(char* szModel, char* szPrompt, PFUNC pCallBack) {
        gpt_params params;

        params.model = szModel;
        params.prompt = szPrompt;
        params.sparams.temp = 0.7;

        // total length of the sequence including the prompt
        const int n_len = 512;

14. Replace:

    // LOG_TEE("%s", llama_token_to_piece(ctx, new_token_id).c_str());

with:

    pCallBack((char*)llama_token_to_piece(ctx, new_token_id).c_str());

15. In the project tree, right-click simple, "Configuration Properties", "General", "Configuration Type", and select "Dynamic Library (.dll)".
16. In the project tree, right-click simple, "Configuration Properties", "Advanced", "Target File Extension", and enter ".dll" instead of ".exe".
17. In the project tree, right-click simple and select "Rebuild".
18. Rename simple.dll to llama64.dll.
Re: Using Microsoft AI Phi-2 from FWH
hi Antonio,
Antonio Linares wrote:
solved,
please download this llama64.dll:
https://github.com/FiveTechSoft/FWH_too ... lama64.dll

I have downloaded the latest FWH_tools-master.zip and used the new (bigger) llama64.dll, but I still get the error message.
what hardware do I need for that AI sample?
I do not have an external graphics card, only an internal IGP.
greeting,
Jimmy
- alerchster
- Posts: 95
- Joined: Mon Oct 22, 2012 4:43 pm
Re: Using Microsoft AI Phi-2 from FWH
Hi Jimmy;
I only ever had error 0 in the program with the downloaded DLL. Then I created llama64.dll myself according to the information above and it worked immediately.
After step 4 at the latest, depending on the VS version, "%ProgramFiles%\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" amd64 should be executed, otherwise "cmake .." would not work.
I have it running on a DELL XPS 9530 notebook with 16GB RAM under Windows 10 Home 64Bit.
Regards
Ing. Anton Lerchster
Re: Using Microsoft AI Phi-2 from FWH
hi,
alerchster wrote:
I only ever had error 0 in the program with the downloaded DLL.
Then I created llama64.dll myself according to the information above and it worked immediately.

thx for the answer. As I got the same error 0, I will try the same way as you did and follow Antonio's instructions.
greeting,
Jimmy
Re: Using Microsoft AI Phi-2 from FWH
We encourage you to try different GGUFs from HuggingFace until you find the one that works best for you.
Orca2 large works very well, though it is slower. Check for yourself which one is best for you, and please share your results here.
There is a new project, based on llama.cpp, that runs 12% faster than llama.cpp. We keep an eye on AI development to offer you the best AI tools for your FWH apps:
https://github.com/SJTU-IPADS/PowerInfer
Re: Using Microsoft AI Phi-2 from FWH
If you think about it, these LLM models are databases that are able to reason...
what a huge step on data processing
Re: Using Microsoft AI Phi-2 from FWH
hi,
I need help to create llama64.dll.
I have downloaded and installed "Git for Windows";
after setup, a small bash console window opens,
but I got stuck at "5. cmake .."
please advise me what to do
greeting,
Jimmy
Re: Using Microsoft AI Phi-2 from FWH
Hi Jimmy,
cmake is part of VS!
After step 4 at the latest, depending on the VS version, "%ProgramFiles%\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" amd64 should be executed, otherwise "cmake .." would not work.
Regards
Ing. Anton Lerchster
Re: Using Microsoft AI Phi-2 from FWH
Jimmy wrote:
hi,
I need help to create llama64.dll.
I have downloaded and installed "Git for Windows";
after setup, a small bash console window opens,
but I got stuck at "5. cmake .."
please advise me what to do

Dear Jimmy, you have to download and install cmake:
https://cmake.org/download/
Re: Using Microsoft AI Phi-2 from FWH
Dear Jimmy,
Could you kindly adapt this code to HMG?
#include "Fivewin.ch"

function Main()

   local oDlg, cPrompt := PadR( "List 10 possible uses of AI from my Windows apps.", 200 )
   local cAnswer := "", oAnswer, oBtn

   DEFINE DIALOG oDlg SIZE 700, 500 TITLE "FWH AI"

   @ 1, 1 GET cPrompt SIZE 300, 15

   @ 3, 1 GET oAnswer VAR cAnswer MULTILINE SIZE 300, 200

   @ 0.7, 52.5 BUTTON oBtn PROMPT "start" ;
      ACTION ( oBtn:Disable(), Llama( "phi-2_Q4_K_M.gguf", RTrim( cPrompt ),;
               CallBack( { | cStr | oAnswer:SetFocus(), oAnswer:Append( cStr ) } ) ),;
               oBtn:Enable(), oBtn:SetFocus() )

   @ 2.2, 52.5 BUTTON "Clear" ACTION oAnswer:SetText( "" )

   ACTIVATE DIALOG oDlg CENTERED

return nil

DLL FUNCTION Llama( cModel AS LPSTR, cPrompt AS LPSTR, pFunc AS PTR ) AS VOID PASCAL LIB "llama64.dll"
Re: Using Microsoft AI Phi-2 from FWH
hi,
alerchster wrote:
cmake is part of VS!
After step 4 at the latest, depending on the VS version, "%ProgramFiles%\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" amd64 should be executed, otherwise "cmake .." would not work.

thx for the answer. After calling vcvarsall, which made cmake available, what is the syntax to generate the *.SLN for the next step 6?
greeting,
Jimmy
Re: Using Microsoft AI Phi-2 from FWH
hi Antonio,
Antonio Linares wrote:
Could you kindly adapt this code to HMG?

will the FWH code work when I "LoadLibrary" llama64.dll, or do I need DLL FUNCTION?
I'm not sure about "CallBack" under HMG.
greeting,
Jimmy