
Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Tue Dec 19, 2023 9:35 am
by Jimmy
hi Antonio,

thanks for the link.
I have installed the runtime, rebooted the PC and put llama64.dll and phi-2_Q4_K_M.gguf into the same folder as the EXE,
but I still have no luck and get an error loading llama64.dll/phi-2_Q4_K_M.gguf ...

which FWH version is needed for that sample?

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Tue Dec 19, 2023 11:10 am
by Antonio Linares
Dear Jimmy,

solved,

please download this llama64.dll:
https://github.com/FiveTechSoft/FWH_tools/blob/master/llama64.dll

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Tue Dec 19, 2023 11:42 am
by Antonio Linares
Steps to build llama64.dll:

1. git clone https://github.com/ggerganov/llama.cpp

2. cd llama.cpp

3. mkdir temp

4. cd temp

5. cmake ..

6. Open the created llama.cpp.sln using Visual Studio and select "Release" in the top toolbar

7. On the project tree, right click on build_info, select properties and in C/C++ "code generation", "runtime library" select "Multi-threaded (/MT)"

8. On the project tree, right click on common, select properties and in C/C++ "code generation", "runtime library" select "Multi-threaded (/MT)"

9. On the project tree, right click on ggml, select properties and in C/C++ "code generation", "runtime library" select "Multi-threaded (/MT)"

10. On the project tree, right click on llama, select properties and in C/C++ "code generation", "runtime library" select "Multi-threaded (/MT)"

11. On the project tree, right click on simple, select properties and in C/C++ "code generation", "runtime library" select "Multi-threaded (/MT)"

12. On the project tree, click on simple and edit simple.cpp

13. Replace this code:
Code:
int main(int argc, char ** argv) {
    gpt_params params;

    if (argc == 1 || argv[1][0] == '-') {
        printf("usage: %s MODEL_PATH [PROMPT]\n" , argv[0]);
        return 1 ;
    }

    if (argc >= 2) {
        params.model = argv[1];
    }

    if (argc >= 3) {
        params.prompt = argv[2];
    }

    if (params.prompt.empty()) {
        params.prompt = "Hello my name is";
    }

    // total length of the sequence including the prompt
    const int n_len = 32;

with this one:
Code:
typedef void (*PFUNC) (char* szToken);

extern "C" __declspec (dllexport) int Llama(char* szModel, char* szPrompt, PFUNC pCallBack) {
    gpt_params params;

    params.model = szModel;
    params.prompt = szPrompt;
    params.sparams.temp = 0.7;

    // total length of the sequence including the prompt
    const int n_len = 512;
 

14. Replace:
Code:
// LOG_TEE("%s", llama_token_to_piece(ctx, new_token_id).c_str());

with:
Code:
pCallBack((char*)llama_token_to_piece(ctx, new_token_id).c_str());


15. On the project tree, right click on simple, "Configuration Properties", "General", "Configuration Type" and select "Dynamic Library (.dll)"

16. On the project tree, right click on simple, "Configuration Properties", "Advanced", "Target File Extension" and set it to ".dll" instead of ".exe"

17. On the project tree, right click on simple and select "rebuild"

18. Rename simple.dll to llama64.dll (a minimal caller-side sketch in C++ follows below)
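
Here is a minimal caller-side sketch, in plain C++ with the Win32 API, of how the exported Llama() function can be loaded and driven once llama64.dll is built. It assumes the export and the PFUNC callback typedef exactly as introduced in step 13; the LLAMAPROC typedef and PrintToken function are just names for this sketch, and the model file and prompt are placeholders you should adjust to your own setup.
Code:
#include <windows.h>
#include <cstdio>

// must match the typedef added to simple.cpp in step 13
typedef void (*PFUNC)(char* szToken);
typedef int  (*LLAMAPROC)(char* szModel, char* szPrompt, PFUNC pCallBack);

// callback invoked once per generated token (see step 14)
static void PrintToken(char* szToken) {
    printf("%s", szToken);
    fflush(stdout);
}

int main() {
    HMODULE hDll = LoadLibraryA("llama64.dll");
    if (hDll == NULL) {
        printf("LoadLibrary failed, error %lu\n", GetLastError());
        return 1;
    }

    LLAMAPROC pLlama = (LLAMAPROC) GetProcAddress(hDll, "Llama");
    if (pLlama == NULL) {
        printf("Llama export not found, error %lu\n", GetLastError());
        FreeLibrary(hDll);
        return 1;
    }

    // generation runs synchronously; tokens are streamed to PrintToken as they are produced
    pLlama((char*) "phi-2_Q4_K_M.gguf", (char*) "Hello my name is", PrintToken);

    FreeLibrary(hDll);
    return 0;
}

Keep llama64.dll and the GGUF model in the same folder as the EXE.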

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Tue Dec 19, 2023 10:53 pm
by Jimmy
hi Antonio,
Antonio Linares wrote:solved,

please download this llama64.dll:
https://github.com/FiveTechSoft/FWH_tools/blob/master/llama64.dll

I have downloaded the latest FWH_tools-master.zip and used the new (bigger) llama64.dll, but I still get an error message :(

what hardware do I need for that AI sample?
I do not have an external graphics card, only an integrated GPU (IGP)

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Wed Dec 20, 2023 6:26 am
by alerchster
Hi Jimmy;

I only ever had error 0 in the program with the downloaded DLL. Then I created llama64.dll myself according to the information above and it worked immediately.

After step 4 at the latest, depending on the VS version, "%ProgramFiles%\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" amd64 should be executed, otherwise "cmake .." would not work.

I have it running on a DELL XPS 9530 notebook with 16GB RAM under Windows 10 Home 64Bit.

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Wed Dec 20, 2023 6:32 am
by Jimmy
hi,
alerchster wrote:I only ever had error 0 in the program with the downloaded DLL.
Then I created llama64.dll myself according to the information above and it worked immediately.

thanks for the answer.

as I get the same error 0, I will try the same way as you did and follow Antonio's instructions

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Wed Dec 20, 2023 8:32 am
by Antonio Linares
We encourage you to try different GGUFs from HuggingFace until you find the one that works best for you.

Orca2 large works very well, though it is slower. Check for yourself which one is best for you and please share your results here :-)

There is a new project, based on llama.cpp, that runs 12% faster than llama.cpp. We keep an eye on AI development to offer you the best AI tools for your FWH apps ;-)
https://github.com/SJTU-IPADS/PowerInfer

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Wed Dec 20, 2023 9:34 am
by Antonio Linares
If you think about it, these LLM models are databases that are able to reason...

what a huge step in data processing :-)

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Thu Dec 21, 2023 3:33 am
by Jimmy
hi,

I need help creating llama64.dll

I have downloaded and installed "Git for Windows";
after setup, a small Bash console window opens
[screenshot of the Git Bash console window]
but I got stuck at "5. cmake .."

please advise me what to do

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Thu Dec 21, 2023 4:01 am
by alerchster
Hi Jimmy,

cmake is part of VS!

After step 4 at the latest, depending on the VS version, "%ProgramFiles%\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" amd64 should be executed, otherwise "cmake .." would not work.

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Thu Dec 21, 2023 4:40 am
by Antonio Linares
Jimmy wrote:hi,

I need help creating llama64.dll

I have downloaded and installed "Git for Windows";
after setup, a small Bash console window opens
[screenshot of the Git Bash console window]
but I got stuck at "5. cmake .."

please advise me what to do


Dear Jimmy, you have to download and install cmake:
https://cmake.org/download/

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Thu Dec 21, 2023 6:11 am
by Antonio Linares
Dear Jimmy,

Could you kindly adapt this code to HMG?
Code:
#include "Fivewin.ch"

function Main()

   local oDlg, cPrompt := PadR( "List 10 possible uses of AI from my Windows apps.", 200 )
   local cAnswer := "", oAnswer, oBtn

   DEFINE DIALOG oDlg SIZE 700, 500 TITLE "FWH AI"

   @ 1, 1 GET cPrompt SIZE 300, 15

   @ 3, 1 GET oAnswer VAR cAnswer MULTILINE SIZE 300, 200

   @ 0.7, 52.5 BUTTON oBtn PROMPT "start" ;
      ACTION ( oBtn:Disable(), Llama( "phi-2_Q4_K_M.gguf", RTrim( cPrompt ),;
               CallBack( { | cStr | oAnswer:SetFocus(), oAnswer:Append( cStr ) } ) ),;
               oBtn:Enable(), oBtn:SetFocus() )

   @ 2.2, 52.5 BUTTON "Clear" ACTION oAnswer:SetText( "" )

   ACTIVATE DIALOG oDlg CENTERED

return nil

DLL FUNCTION Llama( cModel AS LPSTR, cPrompt AS LPSTR, pFunc AS PTR ) AS VOID PASCAL LIB "llama64.dll"

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Thu Dec 21, 2023 10:17 am
by Jimmy
hi,
alerchster wrote:cmake is part of VS!

After step 4 at the latest, depending on the VS version, "%ProgramFiles%\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" amd64 should be executed, otherwise "cmake .." would not work.

thanks for the answer,
after calling vcvarsall, which made cmake available, what is the syntax to generate the *.sln for the next step (6)?

Re: Using Microsoft AI Phi-2 from FWH

PostPosted: Thu Dec 21, 2023 10:26 am
by Jimmy
hi Antonio,
Antonio Linares wrote:Could you kindly adapt this code to HMG?

I'm not sure about "CallBack" under HMG

will the FWH code work if I use "LoadLibrary" on "llama64.dll", or do I need the DLL FUNCTION declaration?