

Hi!


I've written several posts about how to use local Large Language Models with C#.

 

But what about Small Language Models?

 

 

Well, today's demo is an example of how to use SLMs with ONNX. So let's start with a quick intro to Phi-3 and ONNX, and why to use ONNX. Then, let's showcase some interesting code samples and resources.


In the Phi-3 C# Labs sample repo we can find several samples, including a Console Application that loads a Phi-3 Vision model with ONNX, and analyzes and describes an image.


[animated demo GIF]


[HEADING=1]Introduction to Phi-3 Small Language Model[/HEADING]

 

The Phi-3 Small Language Model (SLM) represents a groundbreaking advancement in AI, developed by Microsoft. It’s part of the Phi-3 family, which includes the most capable and cost-effective SLMs available today. These models outperform others of similar or even larger sizes across various benchmarks, including language, reasoning, coding, and math tasks. The Phi-3 models, including the Phi-3-mini, Phi-3-small, and Phi-3-medium, are designed to be instruction-tuned and optimized for ONNX Runtime, ensuring broad compatibility and high performance.


You can learn more about Phi-3 in:

 


[HEADING=1]Introduction to ONNX[/HEADING]

 

ONNX, or Open Neural Network Exchange, is an open-source format that allows AI models to be portable and interoperable across different frameworks and hardware. It enables developers to use the same model with various tools, runtimes, and compilers, making it a cornerstone for AI development. ONNX supports a wide range of operators and offers extensibility, which is crucial for evolving AI needs.

 


[HEADING=1]Why Use ONNX for Local AI Development[/HEADING]

 

Local AI development benefits significantly from ONNX due to its ability to streamline model deployment and enhance performance. ONNX provides a common format for machine learning models, facilitating the exchange between different frameworks and optimizing for various hardware environments.

 

 

 

For C# developers, this is particularly useful because we have a set of libraries specifically created to work with ONNX models. For example:

 

Microsoft.ML.OnnxRuntime, GitHub - microsoft/onnxruntime: ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

 

[HEADING=1]C# ONNX and Phi-3 and Phi-3 Vision[/HEADING]

 

 

The Phi-3 Cookbook GitHub repository contains C# labs and workshop sample projects that demonstrate the use of Phi-3 mini and Phi-3-Vision models in .NET applications.


It showcases how these powerful models can be utilized for tasks like question-answering and image analysis within a .NET environment.


Project Description Location
LabsPhi301 This is a sample project that uses a local Phi-3 model to answer a question. The project loads a local ONNX Phi-3 model using the [iCODE]Microsoft.ML.OnnxRuntime[/iCODE] libraries. .\src\LabsPhi301\
LabsPhi302 This is a sample project that implements a console chat using Semantic Kernel. .\src\LabsPhi302\
LabsPhi303 This is a sample project that uses a local Phi-3 Vision model to analyze images. The project loads a local ONNX Phi-3 Vision model using the [iCODE]Microsoft.ML.OnnxRuntime[/iCODE] libraries. .\src\LabsPhi303\
LabsPhi304 This is a sample project that uses a local Phi-3 Vision model to analyze images. The project loads a local ONNX Phi-3 Vision model using the [iCODE]Microsoft.ML.OnnxRuntime[/iCODE] libraries. The project also presents a menu with different options to interact with the user. .\src\LabsPhi304\


To run the projects, follow these steps:



  1. Clone the repository to your local machine.
     
     

  2. Open a terminal and navigate to the desired project. For example, let's run [iCODE]LabsPhi301[/iCODE].
     
     
    cd .\src\LabsPhi301\
     

  3. Run the project with the command
     
     
    dotnet run
     

  4. The sample project asks for user input and replies using the local model.
     
    The running demo is similar to this one:
     

 

[animated demo GIF]


[HEADING=1]Sample Console Application to use an ONNX model[/HEADING]


Let's take a look at the first demo application. The following code snippet is from [iCODE]/src/LabsPhi301/Program.cs[/iCODE]. The main steps to use a model with ONNX are:

 

  • The Phi-3 model, stored at the location in [iCODE]modelPath[/iCODE], is loaded into a [iCODE]Model[/iCODE] object.
  • This model is then used to create a [iCODE]Tokenizer[/iCODE] which will be responsible for converting our text inputs into a format that the model can understand.

 

And this is the chatbot implementation.

 

 

  • The chatbot operates in a continuous loop, waiting for user input.
  • When a user types a question, the question is combined with a system prompt to form a full prompt.
  • The full prompt is then tokenized and passed to a [iCODE]Generator[/iCODE] object.
  • The generator, configured with specific parameters, generates a response one token at a time.
  • Each token is decoded back into text and printed to the console, forming the chatbot's response.
  • The loop continues until the user decides to exit by entering an empty string.
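The prompt-assembly step can be sketched in isolation. The helper name [iCODE]BuildPhi3Prompt[/iCODE] is ours, not part of the sample; the tag template itself ([iCODE]<|system|>[/iCODE], [iCODE]<|user|>[/iCODE], [iCODE]<|assistant|>[/iCODE], [iCODE]<|end|>[/iCODE]) matches the one used in the code below:

```csharp
using System;

// Hypothetical helper (the name is ours) that assembles the Phi-3 instruct
// chat template: system turn, user turn, then the marker where the model
// starts writing the assistant turn.
static string BuildPhi3Prompt(string systemPrompt, string userQuestion) =>
    $"<|system|>{systemPrompt}<|end|><|user|>{userQuestion}<|end|><|assistant|>";

var prompt = BuildPhi3Prompt("You are a helpful assistant.", "What is ONNX?");
Console.WriteLine(prompt);
// <|system|>You are a helpful assistant.<|end|><|user|>What is ONNX?<|end|><|assistant|>
```

Everything after [iCODE]<|assistant|>[/iCODE] is left empty on purpose: the generator continues the text from that marker.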


using Microsoft.ML.OnnxRuntimeGenAI;

var modelPath = @"D:\phi3\models\Phi-3-mini-4k-instruct-onnx\cpu_and_mobile\cpu-int4-rtn-block-32";
var model = new Model(modelPath);
var tokenizer = new Tokenizer(model);

var systemPrompt = "You are an AI assistant that helps people find information. Answer questions using a direct style. Do not share more information than requested by the users.";

// chat start
Console.WriteLine(@"Ask your question. Type an empty string to Exit.");

// chat loop
while (true)
{
   // Get user question
   Console.WriteLine();
   Console.Write(@"Q: ");
   var userQ = Console.ReadLine();    
   if (string.IsNullOrEmpty(userQ))
   {
       break;
   }

   // show phi3 response
   Console.Write("Phi3: ");
   var fullPrompt = $"<|system|>{systemPrompt}<|end|><|user|>{userQ}<|end|><|assistant|>";
   var tokens = tokenizer.Encode(fullPrompt);

   var generatorParams = new GeneratorParams(model);
   generatorParams.SetSearchOption("max_length", 2048);
   generatorParams.SetSearchOption("past_present_share_buffer", false);
   generatorParams.SetInputSequences(tokens);

   var generator = new Generator(model, generatorParams);
   while (!generator.IsDone())
   {
       generator.ComputeLogits();
       generator.GenerateNextToken();
       var outputTokens = generator.GetSequence(0);
       var newToken = outputTokens.Slice(outputTokens.Length - 1, 1);
       var output = tokenizer.Decode(newToken);
       Console.Write(output);
   }
   Console.WriteLine();
}
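The token-by-token streaming in the inner loop can be illustrated without the model or the real tokenizer. This toy sketch (the token table is purely illustrative) shows the same idea: the output sequence grows by one token per step, and slicing off the last element yields only the newly generated token to decode and print:

```csharp
using System;
using System.Collections.Generic;

// Toy stand-in for the tokenizer's Decode: maps token ids to text pieces.
var vocab = new Dictionary<int, string>
{
    [1] = "Hello", [2] = ",", [3] = " world", [4] = "!"
};

// The growing output sequence, as GetSequence(0) would return it.
var sequence = new List<int>();
int[] generated = { 1, 2, 3, 4 }; // pretend these are the model's picks

foreach (var token in generated)
{
    sequence.Add(token);            // one new token per generation step
    var newToken = sequence[^1];    // slice off only the last token
    Console.Write(vocab[newToken]); // decode and print just that piece
}
Console.WriteLine();
// Prints: Hello, world!
```

Decoding only the last token on each iteration is what makes the console output appear incrementally, like a typing effect, instead of waiting for the full answer.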


This is a great example of how you can leverage the power of Phi-3 and ONNX in a C# application to create an interactive AI experience. Please take a look at the other scenarios, and if you have any questions, we are happy to receive your feedback!

 

Best

 

Bruno Capuano


Note: Part of the content of this post was generated by Microsoft Copilot, an AI assistant.

 
