OpenAI Function Calling with PHP
OpenAI function calling is a special feature of the OpenAI API that allows you to feed the model with additional data by executing your own functions. Although the official documentation is well written and very clear, we had some trouble figuring out how the whole process works. Most of the code examples available online are written in Python, but for the purpose of this article we shall be using PHP. In fact, as you'll see later, the language does not really matter.
Without further ado, let's go!
OPENAI API
The OpenAI API is a tool that can do a lot of things, but its main strength is the ability to generate text responses. In simple words, you ask the model something, the model tries to understand what you asked, and it returns a human-like response.
PROMPT EXPLAINED
The text that you send to the API is called a PROMPT. There is nothing fancy here: the prompt is just the text query that you send, i.e. the actual question that you need answered. A prompt is really just simple text, but like any text it has a few key aspects.
Parts of the prompt
- Instruction
This is the part of your prompt that tells the model what to do, e.g. answer a question, generate an image etc. Basically, what you need it to do.
- Context
Just like a human, the model needs some initial context. Context essentially tells the model what perspective to use when digesting your prompt.
- Input data
In order to answer correctly, the model will need some data. It can use general knowledge, but if your prompt includes specific data, the model will focus on that data and base its response on it.
- Output requirements
Your prompt can also tell the model how to respond. For example, it can return the response as plain text, but in some cases you might ask it to return the response as a table or as a chart. A minimal request combining these parts is sketched below.
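To make these parts concrete, here is a minimal sketch of a Chat Completions request made with PHP's curl extension. The endpoint and payload shape follow the public OpenAI API; the model name and prompt wording are placeholders you would adapt. Context is left out here on purpose; the next section shows how to supply it.

```php
<?php
// Minimal sketch: the parts of a prompt mapped onto a Chat Completions
// request. The model name and prompt wording are placeholders.
$payload = [
    'model'    => 'gpt-4o',
    'messages' => [
        // Instruction + input data + output requirements in one user message
        ['role' => 'user', 'content' =>
            'Recommend an engine oil ' .            // instruction
            'for a common rail diesel engine. ' .   // input data
            'Return the answer as a short table.'], // output requirements
    ],
];

$ch = curl_init('https://api.openai.com/v1/chat/completions');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
    ],
    CURLOPT_POSTFIELDS     => json_encode($payload),
]);
$response = json_decode(curl_exec($ch), true);
echo $response['choices'][0]['message']['content'];
```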
CONTEXT EXPLAINED
Context is best explained with an example. If you want to understand which engine oil is suitable for your diesel engine, a prompt may look like this:
prompt
What engine oil is best suitable for common rail engines?
The model would probably return a very good answer. However, you can tell the model to construct the response from another perspective. Consider the following example.
prompt
You are a mechanic specializing in maintaining diesel engines. What engine oil is best suitable for common rail engines?
In this case we're providing some context to the model. It now answers as if it were a real human mechanic. The answer might still look similar, but it will come with more details and specifics that the first prompt might not surface.
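In the Chat Completions API, this kind of context is usually supplied as a system message rather than prepended to the question. A short sketch, reusing the prompts above:

```php
// Context supplied as a "system" message, the usual way to set a
// perspective in the Chat Completions API:
$payload['messages'] = [
    ['role' => 'system', 'content' => 'You are a mechanic specializing in maintaining diesel engines.'],
    ['role' => 'user',   'content' => 'What engine oil is best suitable for common rail engines?'],
];
```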
OPENAI FUNCTION CALLING
OpenAI function calling is a special feature of the API that lets the model access custom data specific to your own use case. Models like GPT-4o use general knowledge and answer well, but they do not have access to your local data and may not be able to provide fine-grained results.
NOW PAY ATTENTION
This is where some developers have trouble understanding function calling. Function calling allows you to include function definitions with your prompt. OpenAI does NOT execute these functions. Instead, the model can decide that your prompt can't be answered directly and that some of the functions you have included may provide additional details for constructing a response.
This means that the process is based on iterations.
- In the first iteration, you send the prompt and your function definitions. The model checks the prompt and decides that it can't answer directly and may use some of the functions you have provided it with.
- You get a response telling you that a specific function may provide more insights for the final answer.
- You execute your own function in your own server/environment. The response from the function is then added to your initial prompt.
- The next iteration is done, i.e. you send a new prompt to the API that now includes the initial prompt plus the result from your own function.
- You get a final response that is based on your own specific data.
The 'magic' here is that the model decides which function best fits the response, and it can not only ask you to execute it, but also tell you what arguments to execute it with, based on the initial prompt.
Let's show this in a quick example. Consider the following OpenAI request. We'd like to ask GPT to count the orders of a specific customer in our database. If we pass this as simple text and ask the model directly, e.g. Count orders from dev...per@an...ave.com, it will provide a generic response like this:
To determine the number of orders associated with the email address dev...er@ano...e.com, you can follow these steps within your Magento store's admin panel.
As you can see, it's telling us how to do it, but it doesn't do it directly because the model does not have access to our specific data. By using OpenAI function calling, we can do much better. Below is an example of how to include functions in our prompt.
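The following is a sketch of such a request payload. The shape of the tools array follows the Chat Completions API; the description wording and parameter schema are our own choices for this example.

```php
<?php
// Request payload with a function definition attached. The "tools" shape
// follows the Chat Completions API; the description wording is our own.
$payload = [
    'model'    => 'gpt-4o',
    'messages' => [
        ['role' => 'user', 'content' => 'Count orders from dev...per@an...ave.com'],
    ],
    'tools'    => [
        [
            'type'     => 'function',
            'function' => [
                'name'        => 'count_placed_orders',
                // The description is itself a prompt: it tells the model
                // when this function is worth calling.
                'description' => 'Counts the orders placed in our store by a customer identified by email, and sums the amount spent.',
                'parameters'  => [
                    'type'       => 'object',
                    'properties' => [
                        'email' => [
                            'type'        => 'string',
                            'description' => 'The email address of the customer.',
                        ],
                    ],
                    'required'   => ['email'],
                ],
            ],
        ],
    ],
];
```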
In the example shown above, we have included a specific function with our prompt. As you can see, the function is defined by passing its name, its description and the parameters it can accept. Now pay attention again: the function description is also considered a prompt in the message queue. This means it is really important to describe the function and tell the model what it does, so it can decide whether this function is suitable for better answering the initial prompt.
The function itself is defined in our own PHP code as follows:
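The original implementation isn't reproduced here, so the following is a sketch. It assumes a PDO connection and a Magento-style sales_order table with customer_email and grand_total columns; adapt the query to your own schema.

```php
<?php
// Sketch of the local implementation. The PDO credentials and the
// sales_order table/columns are assumptions - adapt them to your schema.
function count_placed_orders(string $email): array
{
    $pdo = new PDO('mysql:host=localhost;dbname=magento', 'db_user', 'db_pass');

    $stmt = $pdo->prepare(
        'SELECT COUNT(*) AS orders_count,
                COALESCE(SUM(grand_total), 0) AS total_spent
           FROM sales_order
          WHERE customer_email = :email'
    );
    $stmt->execute(['email' => $email]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    // Return a structure we can JSON-encode and feed back to the model.
    return [
        'orders_count' => (int) $row['orders_count'],
        'total_spent'  => (float) $row['total_spent'],
    ];
}
```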
Now let's see what the request flow looks like. In the following example, we make an initial request with the initial prompt and the function included. The API will respond with a message saying that a function is likely suitable for improving the response, and the finish_reason will be 'tool_calls' instead of 'stop'.
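Again a sketch rather than the original listing: it sends the first request, checks finish_reason, runs the local function with the arguments the model parsed from the prompt, appends the result to the message queue as a tool message, and prompts the model again. The openai_chat() helper is hypothetical and simply wraps the curl call shown earlier.

```php
<?php
// Hypothetical helper wrapping the curl call from earlier: POSTs $payload
// to /v1/chat/completions and returns the decoded JSON response.
function openai_chat(array $payload): array
{
    $ch = curl_init('https://api.openai.com/v1/chat/completions');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => [
            'Content-Type: application/json',
            'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
        ],
        CURLOPT_POSTFIELDS     => json_encode($payload),
    ]);
    return json_decode(curl_exec($ch), true);
}

// First iteration: initial prompt + function definition ($payload from above).
$response = openai_chat($payload);
$choice   = $response['choices'][0];

if ($choice['finish_reason'] === 'tool_calls') {
    // Keep the assistant's tool-call message in the queue.
    $payload['messages'][] = $choice['message'];

    foreach ($choice['message']['tool_calls'] as $toolCall) {
        // The model tells us which function to run and with which arguments,
        // e.g. {"email":"dev...per@an...ave.com"} parsed from the prompt.
        $args   = json_decode($toolCall['function']['arguments'], true);
        $result = count_placed_orders($args['email']);

        // Feed the result back as a "tool" message.
        $payload['messages'][] = [
            'role'         => 'tool',
            'tool_call_id' => $toolCall['id'],
            'content'      => json_encode($result),
        ];
    }

    // Second iteration: the model now has our data; finish_reason is 'stop'.
    $response = openai_chat($payload);
}

echo $response['choices'][0]['message']['content'];
```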
The final response will now contain a fine-grained and accurate result as follows:
The customer with the email deve...er@anow...e.com has placed a total of 12 orders, spending a total of 356.12 EUR.
As you can see, the model can now return an exact result: it has decided that, to better answer the initial request, it can get detailed information by executing the count_placed_orders() function. The model itself does not execute the function; on the first iteration it returns a response saying that the function should be executed with the parsed email as its argument. From there, you execute the function, do some work locally, and have the function return a detailed result.
Then the initial prompt is updated: the response from the function is added to the messages queue, the OpenAI model is prompted again, and now it has detailed information that it can parse and understand.
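For clarity, this is roughly what the message queue can look like on the second iteration. The tool_call id and the figures are illustrative, made up to match the example answer above, not real API output.

```php
// Illustrative second-iteration message queue:
$payload['messages'] = [
    ['role' => 'user', 'content' => 'Count orders from dev...per@an...ave.com'],
    // The assistant's first reply: a request to call our function.
    [
        'role'       => 'assistant',
        'tool_calls' => [[
            'id'       => 'call_abc123',
            'type'     => 'function',
            'function' => [
                'name'      => 'count_placed_orders',
                'arguments' => '{"email":"dev...per@an...ave.com"}',
            ],
        ]],
    ],
    // Our locally produced result, fed back to the model.
    [
        'role'         => 'tool',
        'tool_call_id' => 'call_abc123',
        'content'      => '{"orders_count":12,"total_spent":356.12}',
    ],
];
```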