
Building API Powered Chatbot & Application using AI SDK

Build Your Vision into Powerful Software!

At Murmu Software Infotech, we specialize in building custom software, apps, APIs, and full-stack platforms, tailored exactly to your business needs.

From .NET, PHP, and Java to React, Next.js, and AI, we deliver scalable, secure, and high-performance solutions for web, desktop, and mobile.

Custom ERPs | eCommerce Platforms | SaaS Products | AI-Powered Apps

Scalable Code | Expert Engineers | Ongoing Support

Get in Touch Today!
Call/WhatsApp: +91 9110176498 | +91 8235264677
Website: www.murmusoftwareinfotech.com
YouTube Link: https://www.youtube.com/watch?v


Presentation Transcript


1. Building API Powered Chatbot & Application using AI SDK

#1. Foundations

What is the AI SDK?
The AI SDK is a TypeScript toolkit designed to help developers build AI-powered applications and agents with React, Next.js, Node.js, and other JavaScript frameworks.

What does the AI SDK provide?
Integrating large language models (LLMs) into applications is complicated and heavily dependent on the specific model, so the AI SDK provides two layers for easy implementation:
AI SDK Core: a unified API for generating text, structured objects, and tool calls, and for building agents with LLMs.
AI SDK UI: a set of framework-agnostic hooks for quickly building chat and generative user interfaces.
Popular model providers: OpenAI, xAI Grok, Azure, Cohere, Anthropic, and more.

2. #2. AI SDK Overview
The AI SDK standardizes integrating artificial intelligence (AI) models across supported providers. This makes it easy for developers to build great AI applications without getting bogged down in provider-specific technical details. For example, generate text with OpenAI models using the AI SDK:

import { generateText } from "ai"
import { openai } from "@ai-sdk/openai"

const { text } = await generateText({
  model: openai("o3-mini"),
  prompt: "What is ChatGPT?"
})

3. AI SDK Concepts

Generative Artificial Intelligence:
Generative artificial intelligence models predict and generate various types of outputs (such as text, images, or audio) based on what's statistically likely, drawing on patterns learned from their training data. For example:
Given a photo, a generative model can generate a caption.
Given an audio file, a generative model can generate a transcription.
Given a text description, a generative model can generate an image.

4. Large Language Model (LLM):
A Large Language Model (LLM) is a type of AI model that works with text. It reads a sequence of text and predicts which words should come next, picking the most likely words based on patterns it has learned, and it keeps generating words until it reaches an end point.

Embedding Models:
An embedding model converts complex data (like words or images) into a dense vector (a list of numbers) representation, known as an embedding. Unlike generative models, embedding models do not generate new text or data. Instead, they provide representations of semantic and syntactic relationships between entities that can be used as input for other models or for other natural language processing tasks.
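To make this concrete, here is a minimal sketch of computing an embedding with the AI SDK's embed helper, assuming the OpenAI provider and the text-embedding-3-small model (both illustrative choices):

import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';

// Turn a sentence into a dense vector; the vector's length
// depends on the embedding model used.
const { embedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'sunny day at the beach',
});

console.log(embedding.length); // e.g. 1536 numbers

The resulting vector can then be compared against other embeddings (for example with cosine similarity) to find semantically related text.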

5. #3. Providers and Models
Companies such as OpenAI and Anthropic (providers) offer access to a range of large language models (LLMs) with differing strengths and capabilities. Each provider typically has its own unique method for interfacing with its models, complicating the process of switching providers and increasing the risk of vendor lock-in. To solve these challenges, AI SDK Core offers a standardized approach to interacting with LLMs through a language model specification that abstracts differences between providers. This unified interface allows you to switch between providers with ease while using the same API everywhere. Here is an overview of the AI SDK Provider Architecture:

6. [Diagram: AI SDK Provider Architecture]

7. AI SDK Providers
The AI SDK comes with a wide range of providers that you can use to interact with different language models:
xAI Grok Provider (@ai-sdk/xai)
OpenAI Provider (@ai-sdk/openai)
Azure OpenAI Provider (@ai-sdk/azure)
Anthropic Provider (@ai-sdk/anthropic)
Amazon Bedrock Provider (@ai-sdk/amazon-bedrock)
Google Generative AI Provider (@ai-sdk/google)
Google Vertex Provider (@ai-sdk/google-vertex)

Self-Hosted Models:
You can access self-hosted models with the following providers:
Ollama Provider
LM Studio
Baseten
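Because the interface is unified, switching providers usually means swapping only the model argument. A minimal sketch, assuming the OpenAI and Anthropic provider packages are installed (the model names are illustrative):

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

// The same call shape works for both providers; only the model changes.
const prompt = 'Summarize the benefits of a unified LLM API in one sentence.';

const fromOpenAI = await generateText({ model: openai('gpt-4o-mini'), prompt });
const fromAnthropic = await generateText({ model: anthropic('claude-3-5-sonnet-latest'), prompt });

console.log(fromOpenAI.text);
console.log(fromAnthropic.text);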

8. #4. Prompts
Prompts are instructions that you give a large language model (LLM) to tell it what to do. It's like asking someone for directions: the clearer your question, the better the directions you'll get. Many LLM providers offer complex interfaces for specifying prompts, involving different roles and message types. While these interfaces are powerful, they can be hard to use and understand. To simplify prompting, the AI SDK supports text, message, and system prompts.

Text Prompts:
Text prompts are strings, ideal for simple generation use cases, e.g. repeatedly generating content for variants of the same prompt text. You can set text prompts using the prompt property made available by AI SDK functions like streamText or generateObject. You can structure the text in any way and inject variables, e.g. using a template literal.

9. const result = await generateText({
  model: yourModel,
  prompt: 'Invent a new holiday and describe its traditions.',
});

You can also use template literals to provide dynamic data to your prompt.

const result = await generateText({
  model: yourModel,
  prompt:
    `I am planning a trip to ${destination} for ${lengthOfStay} days. ` +
    `Please suggest the best tourist activities for me to do.`,
});

10. System Prompts
System prompts are the initial set of instructions given to models that help guide and constrain the models' behavior and responses. You can set system prompts using the system property. System prompts work with both the prompt and the messages properties.

const result = await generateText({
  model: yourModel,
  system:
    `You help planning travel itineraries. ` +
    `Respond to the users' request with a list ` +
    `of the best stops to make in their destination.`,
  prompt:
    `I am planning a trip to ${destination} for ${lengthOfStay} days. ` +
    `Please suggest the best tourist activities for me to do.`,
});

11. Message Prompts
A message prompt is an array of user, assistant, and tool messages. Message prompts are great for chat interfaces and more complex, multi-modal prompts. You can use the messages property to set message prompts. Each message has a role and a content property. The content can either be text (for user and assistant messages) or an array of relevant parts (data) for that message type.

const result = await streamUI({
  model: yourModel,
  messages: [
    { role: 'user', content: 'Hi!' },
    { role: 'assistant', content: 'Hello, how can I help?' },
    { role: 'user', content: 'Where can I buy the best Currywurst in Berlin?' },
  ],
});

12. User Messages
Text Parts: Text content is the most common type of content. It is a string that is passed to the model. If you only need to send text content in a message, the content property can be a string, but you can also use it to send multiple content parts.

const result = await generateText({
  model: yourModel,
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Where can I buy the best Currywurst in Berlin?',
        },
      ],
    },
  ],
});

13. Image Parts
User messages can include image parts. An image can be one of the following:
base64-encoded image: a string with base64-encoded content, or a data URL string, e.g. data:image/png;base64,...
binary image: ArrayBuffer, Uint8Array, Buffer
URL: an http(s) URL string, e.g. https://example.com/image.png, or a URL object, e.g. new URL('https://example.com/image.png')
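As a sketch, a user message that pairs a text part with an image URL might look like the following; the model name is illustrative and must be a vision-capable model:

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe what is in this image.' },
        // Image parts accept URLs, data URL strings, or binary data.
        { type: 'image', image: new URL('https://example.com/image.png') },
      ],
    },
  ],
});

console.log(result.text);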

14. Tools
Large language models (LLMs) are great at generating text but struggle with discrete tasks, such as solving math problems, interacting with the outside world, or getting real-time information like the weather. To solve this, tools are used. Tools are actions that an LLM can invoke; the results of these actions can be reported back to the LLM to be considered in its next response. For example, when you ask an LLM for the "weather in London" and a weather tool is available, it could call that tool with London as the argument. The tool would then fetch the weather data and return it to the LLM, which can use this information in its response.
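Here is a minimal sketch of that weather scenario; the tool's lookup is simulated, and a real implementation would call an actual weather API:

import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4o-mini'),
  tools: {
    weather: tool({
      description: 'Get the current weather for a location',
      parameters: z.object({
        location: z.string().describe('The city to get the weather for'),
      }),
      // Simulated lookup; a real implementation would call a weather API here.
      execute: async ({ location }) => ({
        location,
        temperature: 18,
        condition: 'cloudy',
      }),
    }),
  },
  prompt: 'What is the weather in London?',
});

// Without multiple generation steps, the run stops after the tool call;
// the tool's output is available on result.toolResults.
console.log(result.toolResults);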

15. What is a tool?
A tool is an object that can be called by the model to perform a specific task. You can use tools with generateText and streamText by passing one or more tools to the tools parameter. A tool consists of three properties:
description: an optional description of the tool that can influence when the tool is picked.
parameters: a Zod schema or a JSON schema that defines the parameters. The schema is consumed by the LLM and also used to validate the LLM's tool calls.
execute: an optional asynchronous function that is called with the arguments from the tool call.

Schemas
Schemas are used to define the parameters for tools and to validate tool calls. The AI SDK supports both raw JSON schemas (using the jsonSchema function) and Zod schemas (either directly or using the zodSchema function).

16. import { z } from 'zod';

const recipeSchema = z.object({
  recipe: z.object({
    name: z.string(),
    ingredients: z.array(
      z.object({
        name: z.string(),
        amount: z.string(),
      }),
    ),
    steps: z.array(z.string()),
  }),
});
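The same schema can also drive structured output. Here is a minimal sketch using generateObject with the recipeSchema defined above (the model and prompt are illustrative):

import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';

// generateObject validates the model's output against the schema
// and returns a typed object.
const { object } = await generateObject({
  model: openai('gpt-4o-mini'),
  schema: recipeSchema,
  prompt: 'Generate a lasagna recipe.',
});

console.log(object.recipe.name);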

17. Toolkits
When you work with tools, you typically need a mix of application-specific tools and general-purpose tools. Several providers offer pre-built tools as toolkits that you can use out of the box:
Agentic - a collection of 20+ tools, most of which connect to external APIs such as Exa or E2B.
Browserbase - a browser tool that runs a headless browser.
Interlify - converts APIs into tools so that AI can connect to your backend in minutes.
Freestyle - a tool for your AI to execute JavaScript or TypeScript with arbitrary node modules.

18. Streaming
Large language models (LLMs) are extremely powerful. However, when generating long outputs, they can be very slow compared to the latency you're likely used to. A streaming UI can start displaying the response much faster than a blocking UI that waits for the full output. Streaming interfaces can greatly enhance the user experience, especially with larger language models, but they are not always necessary or beneficial.

19. import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const { textStream } = streamText({
  model: openai('gpt-4-turbo'),
  prompt: 'Write a poem about embedding models.',
});

// Print each chunk as it arrives instead of waiting for the full text.
for await (const textPart of textStream) {
  console.log(textPart);
}

20. Agents
When building AI applications, you often need systems that can understand context and take meaningful actions. The key consideration when building these systems is finding the right balance between flexibility and control. Let's explore different approaches and patterns:
1. Single-step agents
2. Multi-step agents

Let's Build an AI Chatbot Using the AI SDK
Create a Next.js application:
npm create next-app@latest my-ai-app
Navigate to my-ai-app and install all the SDKs:
npm install ai @ai-sdk/react @ai-sdk/openai zod
If they don't install in one go, install them one by one. Then create a file .env.local and put your OpenAI key in it:
OPENAI_API_KEY=xxxxxxxxx
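Before wiring up the chatbot, here is a minimal sketch of the multi-step pattern: with maxSteps, the model can call a tool and then use the tool's result in a follow-up generation. The getTime tool here is a hypothetical example:

import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { text, steps } = await generateText({
  model: openai('gpt-4o-mini'),
  tools: {
    getTime: tool({
      description: 'Get the current time in a given IANA time zone',
      parameters: z.object({ timeZone: z.string() }),
      execute: async ({ timeZone }) =>
        new Date().toLocaleTimeString('en-US', { timeZone }),
    }),
  },
  // Allow up to 3 generation steps: call the tool, read its result,
  // then write the final answer.
  maxSteps: 3,
  prompt: 'What time is it in Tokyo right now?',
});

console.log(text);
console.log(steps.length); // number of generation steps actually taken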

21. Create a Route Handler
Create a route handler, app/api/chat/route.ts, and add the following code:

import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Allow streaming responses up to 30 seconds.
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'),
    messages,
  });

  return result.toDataStreamResponse();
}

22. Create UI
Create the chat page (e.g. app/page.tsx) and add the following code:

'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(message => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.content}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}

23. Run the App
npm run dev
Then open http://localhost:3000 in your browser.
Results: for the prompt "What is AI SDK?", the model's answer streams into the chat UI.
