Multiple OpenAI Functions in PHP / Laravel

tl;dr

Alfred Nutile · 9 min read · Sep 2, 2023

In this example, we’ll demonstrate how to use three functions with a single request, enabling the LLM (OpenAI API) and my custom functions to retrieve content from a website, convert it to audio, and generate an image. We’ll explore using the API and Message history to seamlessly link these requests.

By the way, I have a YouTube channel at https://www.youtube.com/@AlfredNutile/featured if you want to keep up with the videos I am making about this content. Or, if you would rather read, just follow me here on Medium.

Adding Functions to your OpenAI API Requests

Introduction

First, we’ll examine our objectives, delve into the foundational building blocks, and then tie them all together.

Functions

The overarching aim here, besides learning to use the OpenAI API and other LLMs, is to tap into the powerful “Function Calling” feature (more details in the links below). Though I’ll be using the OpenAI API, I’ll refer to it as an LLM to emphasize its potential compatibility with other APIs, drawing parallels with how Laravel handles functionalities like “Storage”.

The LLM itself won’t directly execute a function. Instead, it analyzes the incoming question to determine if any associated functions can satisfy a segment of the query. Fascinatingly, the system can pick from multiple functions, and as our example will illustrate, multiple functions can be employed for a single query.

Here are the three functions I’ll employ:

  1. Get Content from URL: We’ll employ the Rapid API Scraper to fetch site content. While I could create this function, leveraging the Rapid API provides a convenient experimentation platform.
  2. Content to Speech: Using Rapid API again, we’ll convert content into an audio format. While the voice quality isn’t the best, it serves the purpose.
  3. Text to Image: We’ll use the OpenAI Image API to visualize the content, with customization options available for further exploration.

Message History

This represents a state stored in a database to provide context to the LLM both before and after questions are posed. This context helps the LLM determine whether it should continue from a previous message or conclude. I employ Laravel and a specific database model, which I’ll elaborate on, but you can opt for any system you prefer.
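
To make that concrete, here is a minimal sketch of the kind of Message model I mean (the names and columns are illustrative, not my actual code; the important pieces are role, content, name, and parent_id):

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\BelongsTo;
use Illuminate\Database\Eloquent\Relations\HasMany;

// One row per chat message, with parent_id linking replies back to the
// original question so a whole thread can be rebuilt for the LLM.
class Message extends Model
{
    protected $fillable = ['role', 'content', 'name', 'parent_id'];

    public function parent(): BelongsTo
    {
        return $this->belongsTo(self::class, 'parent_id');
    }

    public function children(): HasMany
    {
        return $this->hasMany(self::class, 'parent_id');
    }
}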

The structured question is dispatched in this format:

Starting JSON Request to the OpenAI API
{
    "model": "gpt-3.5-turbo-16k",
    "messages": [
        {
            "role": "system",
            "content": "Acting as the users assistant and using the following meta data included in the question please answer their question,\n Current Date is 2023202320232023-02-09 07:09\n Only use the functions you have been provided with if needed to help the user with this question:### End Meta Data ### \n\n\n"
        },
        {
            "role": "user",
            "content": "Get the content from this url then convert it to audio https:\/\/medium.com\/@alnutile\/suggestions-around-building-a-good-development-team-in-parallel-to-building-a-good-product-6dcc50b0a551\n\nand then make an image from the text \"House by the river\""
        }
    ],
    "temperature": 0.5,
    "functions": [
        {
            "name": "content_to_voice",
            "description": "Use the string of content to create a voice audio version of the content using an external service",
            "parameters": {
                "type": "object",
                "properties": {
                    "content": {
                        "type": "string",
                        "description": "This is the content to convert to an audio file"
                    }
                },
                "required": [
                    "content"
                ]
            }
        },
        {
            "name": "text_to_image",
            "description": "Create an image for the related content using a sentence that describes a scene in nature like 'Trees near river'",
            "parameters": {
                "type": "object",
                "properties": {
                    "text_for_image": {
                        "type": "string",
                        "description": "A simple sentence to describe a nature scene for the image"
                    }
                },
                "required": [
                    "text_for_image"
                ]
            }
        },
        {
            "name": "get_content_from_url",
            "description": "This allows a user to put a URL into the question and it will then return the content for you to continue on with processing",
            "parameters": {
                "type": "object",
                "properties": {
                    "url": {
                        "type": "string",
                        "description": "This is the URL that the user wants the system to get"
                    }
                },
                "required": [
                    "url"
                ]
            }
        }
    ]
}

Here, “messages” represents the history maintained in the database. The “system” message gives the LLM broader guidance, keeping it accurate and steering it away from speculative answers. I’ve instructed it to use only the provided functions, and the role field plays a pivotal part in this: the “system” sets the stage, and the “user” initiates the dialogue.

Following this, there’s the “functions” key, which encapsulates all the functions I’ve integrated via the UI. Each function defines its parameters and expected outcomes, which are crucial for the LLM to discern how and when to employ your function.
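
For reference, this whole payload is sent from PHP using the openai-php/laravel client linked below; a rough sketch (the variable names are mine, and $messages / $functions are simply the arrays you see in the JSON above):

use OpenAI\Laravel\Facades\OpenAI;

// $messages mirrors the "messages" array above (system, user, and any
// function/assistant history); $functions is the array of definitions.
$response = OpenAI::chat()->create([
    'model'       => 'gpt-3.5-turbo-16k',
    'messages'    => $messages,
    'temperature' => 0.5,
    'functions'   => $functions,
]);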

You might wonder, “Where does this function reside?”

It’s a valid question. Contrary to the perception that the JSON or the function has to live in Python, I register helpers in the app/helpers.php file. Although dependency injection in the AppServiceProvider is feasible, I’ve currently chosen this approach. You can add your own helpers.php file; see the link below.

How I put the function in the helpers.php
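
The screenshot does not reproduce well here, so a simplified sketch of the idea: a plain function in app/helpers.php whose name matches the “name” field the LLM sees. The endpoint, header, and config key below are placeholders, not the real Rapid API values.

<?php

// app/helpers.php (sketch)

use Illuminate\Support\Facades\Http;

if (! function_exists('get_content_from_url')) {
    // Matches the "get_content_from_url" definition sent to the LLM:
    // fetch the page via the scraping API and return it as a string.
    function get_content_from_url(string $url): string
    {
        $response = Http::withHeaders([
            'X-RapidAPI-Key' => config('services.rapidapi.key'),
        ])->get('https://example-scraper.p.rapidapi.com/scrape', [
            'url' => $url,
        ]);

        return $response->body();
    }
}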

This aligns with the “name” in the functions JSON. So, when invoked, the specialized class I’ve created manages these function calls:

The Function calling class
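
Its shape is roughly the following (a simplified sketch; the real class has more error handling, and content_to_voice() / text_to_image() are helpers built the same way as the one sketched above):

<?php

namespace App\Services;

use InvalidArgumentException;

// Dispatcher sketch: take the function name and JSON-encoded arguments
// from the LLM response and call the matching helper.
class FunctionCallHandler
{
    public function handle(string $name, string $arguments): string
    {
        $args = json_decode($arguments, true) ?? [];

        return match ($name) {
            'get_content_from_url' => get_content_from_url($args['url']),
            'content_to_voice'     => content_to_voice($args['content']),
            'text_to_image'        => text_to_image($args['text_for_image']),
            default => throw new InvalidArgumentException("Unknown function: {$name}"),
        };
    }
}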

So when the class that talks to the OpenAI API finds a related function call in the response, it hands the call off to this class.

When the response from OpenAI stops with a “function_call” finish reason
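
In code, that check looks roughly like this (a sketch: the finishReason and functionCall property names follow the openai-php client, so double-check them against your version, and $parentMessage stands for the original user Message):

$choice = $response->choices[0];

if ($choice->finishReason === 'function_call') {
    // The LLM wants a function run: execute it, then store the output as
    // a "function" role Message so the next request carries the result.
    $result = app(\App\Services\FunctionCallHandler::class)->handle(
        $choice->message->functionCall->name,
        $choice->message->functionCall->arguments,
    );

    \App\Models\Message::create([
        'role'      => 'function',
        'name'      => $choice->message->functionCall->name,
        'content'   => $result,
        'parent_id' => $parentMessage->id,
    ]);
}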

Since all of this happens in a queue job, I make sure Horizon is configured to let it run for a while. You do not want timeouts or race conditions (two queue jobs being triggered for the same work).
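
One way to get both guards (a sketch; ChatRequestJob is a placeholder name) is to declare a generous timeout on the job itself and use Laravel’s unique-job contract so two workers never pick up the same parent message at once; whatever timeout you choose should stay below what Horizon allows for that queue.

<?php

namespace App\Jobs;

use App\Models\Message;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldBeUniqueUntilProcessing;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\SerializesModels;

class ChatRequestJob implements ShouldQueue, ShouldBeUniqueUntilProcessing
{
    use Dispatchable, Queueable, SerializesModels;

    // Plenty of room for several API round trips.
    public $timeout = 240;

    public function __construct(public Message $parentMessage)
    {
    }

    // Only one queued job per parent message; the lock is released once
    // processing starts, so the job can safely re-dispatch itself later.
    public function uniqueId(): string
    {
        return (string) $this->parentMessage->id;
    }
}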

The key step here (I could do this in the class above, but I do not) is that the Job class waits for the Message to be returned, and if that message’s role is “function”, it dispatches the message again.

Sending the request in a Queue Job and, if a function was called, putting it back on the Queue
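
Continuing the sketch above, the handle() method is where that loop happens; ChatClient stands in for whatever class talks to the OpenAI API and saves its response as a Message, and ChatUpdated is the broadcast event described next.

// Inside ChatRequestJob (sketch, continuing from above).
public function handle(): void
{
    // Send the current thread to the LLM; the client persists the
    // response (assistant reply or function result) as a Message.
    $responseMessage = app(\App\Services\ChatClient::class)
        ->send($this->parentMessage);

    if ($responseMessage->role === 'function') {
        // A function ran and its output is now part of the history,
        // so put the parent back on the queue for the next step.
        self::dispatch($this->parentMessage);
    }

    // Tell the UI (via Pusher) that the thread changed.
    \App\Events\ChatUpdated::dispatch($this->parentMessage);
}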

Notice, too, that I am broadcasting with Pusher so the UI can receive updates if I want it to, for example refreshing when the work is done.
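
The event can be as small as this (sketch; the class and channel names are placeholders):

<?php

namespace App\Events;

use App\Models\Message;
use Illuminate\Broadcasting\Channel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;

class ChatUpdated implements ShouldBroadcast
{
    use Dispatchable, SerializesModels;

    public function __construct(public Message $message)
    {
    }

    // One channel per thread so the UI only refreshes the chat it shows.
    public function broadcastOn(): Channel
    {
        return new Channel('chat.'.$this->message->id);
    }
}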

Lastly, the Message model has a “parent_id” column, which is null for the initial question from the user. Every message after that relates back to the parent Message, so what goes back onto the Queue is the parent. To keep the payload size down, I only send the five or so latest messages.
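
A sketch of how that trimmed thread can be turned into the “messages” array shown in the JSON ($parent is the original user Message; the system message is prepended separately):

use App\Models\Message;

// Parent question first, then the five most recent replies, oldest first.
$history = $parent->children()->latest()->take(5)->get()->reverse();

$messages = collect([$parent])
    ->concat($history)
    ->map(fn (Message $m) => array_filter([
        'role'    => $m->role,
        'content' => $m->content,
        'name'    => $m->name, // only set on "function" messages
    ], fn ($value) => $value !== null))
    ->values()
    ->all();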

Putting it all Together

In our illustrative example, the user asks the system to get the content from a URL, convert it to audio, and then make an image from the text “House by the river”.


While the UI isn’t the focus, the core idea is attaching specific functions to the posed question. Rather than burdening the API with numerous functions, I select a few that correlate with the question being posed.

Then the backend starts this Message thread/history and sends the needed JSON, including the functions, to the API.

As the history evolves, this data is continually forwarded to the API. Each function executed builds upon the previous one, ensuring the system doesn’t get trapped in a repetitive cycle.

As each message is processed, the system discerns the functions it has already employed. For instance, after processing content, it identifies the opportunity to convert it into audio, triggering the corresponding function.

Our cumulative data, combined with some mock data to keep the process efficient and affordable, appears as this huge amount of JSON by the end:


{
    "model": "gpt-3.5-turbo-16k",
    "messages": [
        {
            "role": "system",
            "content": "Acting as the users assistant and using the following meta data included in the question please answer their question,\n Current Date is 2023202320232023-02-09 07:09\n Only use the functions you have been provided with if needed to help the user with this question:### End Meta Data ### \n\n\n"
        },
        {
            "role": "user",
            "content": "Get the content from this url then convert it to audio https:\/\/medium.com\/@alnutile\/suggestions-around-building-a-good-development-team-in-parallel-to-building-a-good-product-6dcc50b0a551\n\nand then make an image from the text \"House by the river\""
        },
        {
            "role": "function",
            "content": "https:\/\/oaidalleapiprodscus.blob.core.windows.net\/private\/org-ClL1biAi0m1pC2J2IV5C22TQ\/user-i08oJb4T3Lhnsh2yJsoErWJ4\/img-ijTeJHl4EUETLSCjFTNc0LXs.png?st=2023-09-02T18%3A27%3A27Z&se=2023-09-02T20%3A27%3A27Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image\/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-09-02T06%3A08%3A47Z&ske=2023-09-03T06%3A08%3A47Z&sks=b&skv=2021-08-06&sig=PP7VaTSuIxHzQXi%2B1EA3pDXPHpnRRvucmrWGBN4sc9I%3D",
            "name": "text_to_image"
        },
        {
            "role": "assistant",
            "content": "{\"name\":\"text_to_image\",\"content\":{\"text_for_image\":\"House by the river\"}}"
        },
        {
            "role": "function",
            "content": "Perfection is Achieved Not When There Is Nothing More to Add, But When There Is Nothing Left to Take Away - Antoine de Saint-Exuper",
            "name": "get_content_from_url"
        },
        {
            "role": "assistant",
            "content": "{\"name\":\"get_content_from_url\",\"content\":{\"url\":\"https:\\\/\\\/medium.com\\\/@alnutile\\\/suggestions-around-building-a-good-development-team-in-parallel-to-building-a-good-product-6dcc50b0a551\"}}"
        },
        {
            "role": "function",
            "content": "https:\/\/s3.eu-central-1.amazonaws.com\/tts-download\/44e644bc33580c66bd33751beb941c54.wav?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAZ3CYNLHHVKA7D7Z4%2F20230902%2Feu-central-1%2Fs3%2Faws4_request&X-Amz-Date=20230902T185524Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=f196f78591584911a09543f775ad581918d5e6bbb55d49e94e051f98a7f82aaf",
            "name": "content_to_voice"
        }
    ],
    "temperature": 0.5,
    "functions": [
        {
            "name": "content_to_voice",
            "description": "Use the string of content to create a voice audio version of the content using an external service",
            "parameters": {
                "type": "object",
                "properties": {
                    "content": {
                        "type": "string",
                        "description": "This is the content to convert to an audio file"
                    }
                },
                "required": [
                    "content"
                ]
            }
        },
        {
            "name": "text_to_image",
            "description": "Create an image for the related content using a sentence that describes a scene in nature like 'Trees near river'",
            "parameters": {
                "type": "object",
                "properties": {
                    "text_for_image": {
                        "type": "string",
                        "description": "A simple sentence to describe a nature scene for the image"
                    }
                },
                "required": [
                    "text_for_image"
                ]
            }
        },
        {
            "name": "get_content_from_url",
            "description": "This allows a user to put a URL into the question and it will then return the content for you to continue on with processing",
            "parameters": {
                "type": "object",
                "properties": {
                    "url": {
                        "type": "string",
                        "description": "This is the URL that the user wants the system to get"
                    }
                },
                "required": [
                    "url"
                ]
            }
        }
    ]
}

UI And Mocking Slow API Requests

Every class in this setup is tested and mocked, so I can move quickly when focusing on one block or part of the process, and my frequent use of Facades and Real-Time Facades makes mocking easy. During UI building and testing with Pusher, I prefer quick outcomes, so I activate mocking for OpenAI, RapidAPI, and the others by toggling it in the .env file. I’ve also included a check in the classes to verify this flag and return the mocked data when it is enabled.

So if mocking is “on”, the class just returns the canned data I give it. This saves a lot of time when I am trying to get the UI and Pusher looking and working right.
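
The check itself is just a guard at the top of the client class, along these lines (sketch; the env flag, config key, and canned message are placeholders):

// Inside the client class (sketch): OPENAI_MOCK=true in .env,
// exposed through config/services.php as services.openai.mock.
if (config('services.openai.mock')) {
    // Skip the real API call and return canned data so the UI,
    // Pusher events, and queue flow can be exercised quickly.
    return \App\Models\Message::create([
        'role'      => 'assistant',
        'content'   => 'Mocked response content for UI testing.',
        'parent_id' => $parentMessage->id,
    ]);
}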

That’s it!

Links

Function Calling

Rapid API Scraper

Rapid API Large Text To Speech

OpenAI Image API

OpenAI PHP

Add Your own Helpers File to Laravel
