Laravel and OpenAI Chat-able CSV

Alfred Nutile
7 min read · Jun 10, 2023


tl;dr

LaraChain + Laravel lets you stand up a base Laravel install, upload a CSV file, vectorize it, and then ask questions about the data using OpenAI. Soon, this will also be possible with Palm2.

Here’s a link to the video of this article.

Part One of Two

In this segment, I will demonstrate the simplicity of crafting a chat-based UI with LaraChain. In part two, I’ll walk you through the code to explain how everything happens within Laravel.

Why LaraChain?

LaraChain is an open-source endeavor that drew inspiration from LangChain. As a Laravel Developer, I initially worried that this framework might render Laravel obsolete — a somewhat exaggerated fear, admittedly. However, I soon discovered that many of the appealing features of LangChain, such as chaining processes and running background processes, are actions I frequently perform in Laravel.

Python does indeed have plenty of integration tools that make it phenomenal for data handling and machine learning, but PHP can also accomplish a great deal. Furthermore, when PHP is lacking, it can be supplemented with Python through the Process library or a lambda-like function.
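For example, here is a minimal sketch of shelling out to Python from Laravel using the Process facade (Laravel 10+). The script path and arguments are hypothetical; the point is just how little glue code it takes:

```php
<?php

use Illuminate\Support\Facades\Process;

// Run a hypothetical Python helper for work PHP is less suited to.
$result = Process::run('python3 scripts/embed_helper.py --input=storage/app/data.csv');

if ($result->successful()) {
    $output = $result->output(); // whatever the script printed to stdout
} else {
    logger()->error('Python helper failed', ['stderr' => $result->errorOutput()]);
}
```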

How to Make a CSV File Searchable

Assuming you’ve already set up the server (you can find the instructions in the documentation), you can begin by creating a project.

Add A Project

Once the project is added, you’ll be directed to the project dashboard, where you can assemble the “parts” needed for this chain.

Add a Source

Firstly, select a “Source”. Since we’re using a file source, choose the “File Upload Source” option at the bottom left.

After you've selected a source, hit the Play button (ideally this process would start automatically, but remember that this is a work in progress and an open-source project, so feel free to contribute).

The next step is to “Transform” the data.

These transformations are small, pluggable steps that prepare your data for the Large Language Model (LLM) we’ll be utilizing later.

For this example, we’ll add the “CsvTransformer”, since that matches our data type, followed by the “Vectorize Your Data” step. The latter is crucial for most sources, as it allows our Vector database to search the data.

Once you’ve completed these steps, your setup should look something like this:

Then press Play, which will run the Transformers in the order you sorted them (you can drag and drop them as needed).

Thanks to Laravel and Horizon, you can always monitor the status of the Transformations in Horizon.
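Under the hood, this kind of ordered, queued work is exactly what Laravel's job chaining is for. A minimal sketch (the job class names are stand-ins, not LaraChain's actual classes):

```php
<?php

use Illuminate\Support\Facades\Bus;

// Each job runs only after the previous one succeeds, in the order
// given, and Horizon monitors all of it automatically.
Bus::chain([
    new \App\Jobs\CsvTransformerJob($source),
    new \App\Jobs\VectorizeDataJob($source),
])->dispatch();
```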

After the CSV Transformation has converted the uploaded CSV file into rows in the "documents_chunks" table, and the Embed Transformer has created a vector embedding of the data using OpenAI, your data will be ready for searching. However, we need an Outbound mechanism to access the data and produce results.
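To make that concrete, here is a rough sketch of what those two Transformers do, assuming the openai-php/laravel client. This is not LaraChain's actual code: the file path is hypothetical, and the embedding is stored as JSON here for simplicity (pgvector really uses a vector column type):

```php
<?php

use Illuminate\Support\Facades\DB;
use OpenAI\Laravel\Facades\OpenAI;

$handle  = fopen(storage_path('app/uploads/data.csv'), 'r');
$headers = fgetcsv($handle);

while (($row = fgetcsv($handle)) !== false) {
    // One CSV row becomes one chunk of text, e.g. "name: Foo, price: 9.99".
    $content = collect($headers)
        ->combine($row)
        ->map(fn ($value, $key) => "$key: $value")
        ->implode(', ');

    // The Embed Transformer's job: one embedding per chunk via OpenAI.
    $response = OpenAI::embeddings()->create([
        'model' => 'text-embedding-ada-002',
        'input' => $content,
    ]);

    DB::table('documents_chunks')->insert([
        'content'   => $content,
        'embedding' => json_encode($response->embeddings[0]->embedding),
    ]);
}

fclose($handle);
```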

Making the Outbound Chain

Now we build the Outbound interface, composed of chained "Response Types". Throughout this workflow, chaining is essential because operations must run in a specific order (naming is hard 🤔).

The final link will be the “Response Type Chat UI”, where we interact with an LLM (in our case, the OpenAI Chat API).

First (and you can always reorder these later) we add the "Embed Question" step, since we need to vectorize 🔫 the user's input so we can then look it up in the database (in this case we are using a vector driver for Postgres: pgvector/pgvector: Open-source vector similarity search for Postgres (github.com)).
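Here is a minimal sketch of this step together with the "Vector Search" step described next, assuming the openai-php/laravel client and pgvector's cosine-distance operator (<=>). Not LaraChain's exact query:

```php
<?php

use Illuminate\Support\Facades\DB;
use OpenAI\Laravel\Facades\OpenAI;

$question = 'Which rows mention refunds?'; // hypothetical user input

// Embed Question: vectorize the user's input the same way the chunks were.
$embedding = OpenAI::embeddings()->create([
    'model' => 'text-embedding-ada-002',
    'input' => $question,
])->embeddings[0]->embedding;

// pgvector expects a "[0.1,0.2,...]" style literal.
$vector = '[' . implode(',', $embedding) . ']';

// Vector Search: closest chunks first, by cosine distance.
$results = DB::select(
    'SELECT content FROM documents_chunks ORDER BY embedding <=> ?::vector LIMIT 5',
    [$vector]
);
```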

Then we do the search with the "Vector Search" Response Type. After that we "Trim the Text". This one just uses some known patterns to reduce the text so the "token" count we send to the API stays small. You can read more about that here.

Let’s stop and talk about that for a moment. Right now, and this will not be an issue in 1–2 years, we are limited by “tokens”. Here is a TL;DR from ChatGPT:

Tokens are the smallest units of text that AI models like GPT-4 understand. In English, a token can be as short as one character or as long as one word. For example, “ChatGPT” is one token, but “Chat GPT” is two tokens because of the space.

The number of tokens in an API call affects cost, response time, and whether the call works at all. As of my last update in September 2021, the maximum limit for an OpenAI API call was 4096 tokens. This includes both the input and output tokens.

For example, if you use 10 tokens in your prompt, you have up to 4086 tokens left for the model’s response. If a conversation has more tokens than the maximum, you’ll have to truncate, omit, or otherwise shrink your text until it fits. Remember that very long conversations are more likely to receive incomplete replies.

This “Trim Text” tries to strip out what it can while leaving the remaining text understandable by the LLM. You can read more about this here.
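The article doesn't list the exact patterns, but a step like this might look something like the following sketch, collapsing whitespace and squashing filler characters that cost tokens without adding meaning:

```php
<?php

function trimText(string $text): string
{
    // Collapse runs of whitespace into single spaces.
    $text = preg_replace('/\s+/u', ' ', $text);

    // Squash long runs of filler punctuation like "-----" or ".....".
    $text = preg_replace('/([.\-_*=]){3,}/', '$1', $text);

    return trim($text);
}
```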

After that is “Combine Content”. This one just takes all the results from the Vector search and combines them into one big chunk for the LLM. This Response Type has a setting.

The default is 2000 (fixing that now)
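A minimal sketch of the idea, assuming the 2000 setting is a length cap on the combined text (variable names carry on from the search sketch above):

```php
<?php

// Combine Content: glue the vector-search results into one block...
$combined = collect($results)
    ->pluck('content')
    ->implode("\n");

// ...then cut it down to the Response Type's setting (default 2000).
$limit    = 2000;
$combined = mb_substr($combined, 0, $limit);
```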

Ok, so now we have a chain that gets all the content ready for the last link, the "Response Type Chat UI". This is where we talk to an LLM (in this case the OpenAI Chat API). That LLM needs a prompt; soon I will show a Few-Shot prompt.

I asked ChatGPT to help me write this prompt:

Note that the prompt contains a number of "\n" characters to help separate the question from the context. This helped get past a bug in the PHP library being used, so for now I would keep them. Since this is a chat, we are using a message format that saves the conversation in the manner needed to communicate with the system (more on this in Part 2).
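For reference, a request to the Chat API with the context and question separated by "\n" characters might look like this sketch, assuming the openai-php/laravel client. The prompt wording here is illustrative, not the exact prompt from the screenshot:

```php
<?php

use OpenAI\Laravel\Facades\OpenAI;

$response = OpenAI::chat()->create([
    'model'    => 'gpt-3.5-turbo',
    'messages' => [
        [
            'role'    => 'system',
            'content' => "Answer using only the context below.\n\nContext:\n{$combined}",
        ],
        // Earlier messages from the saved chat history would go here.
        [
            'role'    => 'user',
            'content' => $question,
        ],
    ],
]);

$answer = $response->choices[0]->message->content;
```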

Ok the first question failed ☹️

But thanks to the Laravel ecosystem and the nice Log Viewer library, we can see what went wrong.

By the way, I can clear the message history to start the conversation over.

And now we get some results.

As you can see, we’ve got a bit of polishing to do. The formatting needs some elbow grease and the answers aren’t always on the money, but we’re certainly on our way. The path to perfection will involve refining our prompts, finessing the output data (maybe by adding another Response Type to our toolbox), and perhaps remapping the CSV data to better focus on the columns we care about.

A crucial question that we need to address is what exactly we’re asking the database. When a query comes in, are we conducting a “search”, or are we looking for a “summary” of the data? Our system allows us to build a “Response Type” that consults the LLM and reacts accordingly. For instance, if we want a summary or a “Give me a TLDR” of the data, our Response Type could trigger a subsequent Response Type that fetches enough data to generate a comprehensive summary. LangChain has some intriguing ideas on this subject.
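To sketch that idea (this is speculative, not LaraChain's API): a routing Response Type could ask the LLM to classify the question first, then branch accordingly:

```php
<?php

use OpenAI\Laravel\Facades\OpenAI;

// Ask the model to classify the incoming question with one word.
$intent = OpenAI::chat()->create([
    'model'    => 'gpt-3.5-turbo',
    'messages' => [[
        'role'    => 'user',
        'content' => "Classify this request as SEARCH or SUMMARY. Reply with one word.\n{$question}",
    ]],
])->choices[0]->message->content;

if (str_contains($intent, 'SUMMARY')) {
    // Fetch a wider slice of the data and run a summarization prompt.
} else {
    // Run the normal vector-search chain shown earlier.
}
```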

And that’s a wrap for this first part! Stay tuned for Part 2 where I’ll take you backstage to see how all the magic happens under the hood. And, just to keep you on your toes, we’ll be exploring how to feed data into the system using Webhooks and APIs, taking GitHub Pull Requests webhooks as our example.


Alfred Nutile

Creator of LaraLlama.io, an open-source product. Day-to-day developer using Laravel, PHP, Vue.js, and Inertia.