PHP and OpenAI Assistant Threads, File Upload, and Messaging

Alfred Nutile
Jan 7, 2024

This post dives into the unique aspects of using the OpenAI Assistant API for creating context-rich conversations with Large Language Models (LLMs). Key takeaways include:

  • Creating an LLM Assistant: Setting up a foundational chat system with personality and context, applicable across various threads.
  • Starting Threads and Uploading Content: Focusing on specific topics, like a user-written book, and enhancing threads with relevant files for context.
  • Adding and Managing Messages: Implementing a system to track and update message threads, integrating existing chapters for continuity.
  • Utilizing File Uploads in Threads: Demonstrating the process with a practical example, attaching a CSV file to a thread and extracting useful information.
  • Handling File Outputs and Thread Status: Navigating the complexities of managing file outputs from the LLM and monitoring thread completion status.

The GitHub repo can be seen here

This project is quite different from all the work I have done so far with the OpenAI API. It’s so different that I almost gave up halfway through, not liking the API. However, the Assistant API is a great way to do what I have been doing already, but now fully supported by OpenAI. It’s most likely a better system for creating “context” when chatting with the LLM (or any LLM, more on that later).

This post will show exactly how to use an Assistant (we will create it in the playground), then create a Thread around a specific topic — in this case, a “book” being written by the user — and lastly, how to deal with images and files from the LLM.

Creating the LLM Assistant

This is the foundation of the work. From here, we can create “threads,” or conversations with context. The Assistant forms the personality of the chat system: you can give it instructions, files, and data that are relevant to ALL threads. However, you do not want to give it info about a particular thread. For this example, you do NOT want to upload a book here, since each thread will be about a specific user’s book.

For this, we will use the UI that OpenAI provides at OpenAI Assistants.

Sure, you can create it through the API, but for this example, I was not worried about that.
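If you would rather do it in code, a rough sketch with the openai-php/laravel package could look like the following. This is a sketch only; the name, instructions, and model values are placeholders, since the real assistant in this post is built in the Playground.

```php
<?php

use OpenAI\Laravel\Facades\OpenAI;

// Sketch only: the assistant in this post is built in the Playground instead.
// The name, instructions, and model here are placeholders.
$assistant = OpenAI::assistants()->create([
    'name' => 'Book Writing Assistant',
    'instructions' => 'You help the user write and revise chapters of their book.',
    'model' => 'gpt-4-1106-preview',
    'tools' => [
        ['type' => 'code_interpreter'], // Item 5 below
        ['type' => 'retrieval'],        // Item 6 below: lets threads use uploaded files
    ],
]);

// This is the same id you would otherwise copy out of the Playground (Item 1 below).
$assistantId = $assistant->id;
```

Either way, everything else about the assistant works the same once you have its id.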

Item 1 is the assistant id we will use later when talking to the API.

The instructions are where you really have to do a lot of the work to make sure the assistant knows its job. Here, I can talk more about how it should act, for example, “Grumpy high school teacher,” or more.

The model is the one their docs recommend, so we will start with that.

Item 4 we are not using; it is Functions, which you can add to do things like talk to your own API.

Item 5 is the Code interpreter. This does not make much sense here, but I want to see if it helps with things like word count etc.

Item 6 is the big one, Retrieval. With this, we can provide files in the thread to help the assistant have more context. In this example, I will attach Markdown files, but in other situations, I have given it CSV files and more.

Ok, that was the easy part. From here, we are going to do the following:

  • Create a “Thread” around a book.
  • Upload all that book’s content to the thread.
  • Then start a message thread around that content.
  • With each Run (sending the messages to be replied to), we will watch for the results to be done.
  • And then get the reply from the Assistant.
  • Finally, if it gives us back images or files, we will retrieve those.

Again:

  1. Make Assistant
  2. Start a Thread with file(s) if needed
  3. Add Message(s)
  4. Run
  5. Watch Run Status
  6. Retrieve when done (reply or files or more)

Start a Thread

Here, we will get or start a thread around a “book”.

So now, using TinkerWell in this example, we start the thread and then get the Runs related to that thread. With those runs, we can get the latest one and see how it is doing.

We see the run is “completed,” but there are other states it can be in, starting with “queued,” and I will talk later about how to watch for it to finish.
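That TinkerWell session boils down to roughly this sketch, assuming the openai-php/laravel facade and a hypothetical Book model with a thread_id column:

```php
use OpenAI\Laravel\Facades\OpenAI;

// Get or create the thread for this book (the thread_id column is an assumption).
if (! $book->thread_id) {
    $thread = OpenAI::threads()->create([]);
    $book->update(['thread_id' => $thread->id]);
}

// List the runs on the thread and grab the most recent one.
$runs = OpenAI::threads()->runs()->list(
    threadId: $book->thread_id,
    parameters: ['limit' => 1],
);

$latestRun = $runs->data[0] ?? null;

// Statuses include "queued", "in_progress", and "completed", among others.
$status = $latestRun?->status;
```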

Adding a Message

Technically, we have messages already:

At the start of the process, I added all previous chapters so the assistant has context!

Here, I make sure the book has a thread (this is how I can come back to a thread at any time).

And here, when we do kick off this process, I give it any existing chapters. I could add a field to the chapter model called “uploaded” so I could then make sure to add any new chapters as they are added to the book.
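A minimal sketch of that chapter upload; the Book and Chapter fields here are my assumptions about the models in the repo:

```php
use OpenAI\Laravel\Facades\OpenAI;

// Push every existing chapter into the thread as a user message so the
// assistant has the whole book as context.
foreach ($book->chapters as $chapter) {
    OpenAI::threads()->messages()->create($book->thread_id, [
        'role' => 'user',
        'content' => "Chapter: {$chapter->title}\n\n{$chapter->content}",
    ]);

    // An "uploaded" flag on the chapter (as mentioned above) would let us
    // skip chapters that are already in the thread on later passes.
}
```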

Here we kick off the first real “message” the user will ask of the Assistant (so far it has all been set up).

We just use the OpenAI facade to add a message to the thread and then RUN the thread.

As seen in the image above, #3 shows the message is “queued,” so we need to keep an eye on it.
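A sketch of that message-plus-run step, again via the OpenAI facade; $assistantId and the request handling are placeholders:

```php
use OpenAI\Laravel\Facades\OpenAI;

// Add the user's actual question to the thread...
OpenAI::threads()->messages()->create($book->thread_id, [
    'role' => 'user',
    'content' => $request->input('message'),
]);

// ...then kick off a Run so the assistant replies. $assistantId is the id
// copied from the Playground earlier.
$run = OpenAI::threads()->runs()->create(
    threadId: $book->thread_id,
    parameters: ['assistant_id' => $assistantId],
);

$run->status; // usually "queued" right after creation
```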

This can be tricky in a real application. Using the Laravel Queue, I just put a job on the queue that repeats this same check UNTIL the run is completed. Then, using Pusher or some other event system, I let the user know in the UI that it is done.
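A minimal sketch of such a job, assuming the thread and run ids are passed in; the class name, retry count, delay, and the commented-out event are all placeholders:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use OpenAI\Laravel\Facades\OpenAI;

class WatchRunStatusJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // Each release() counts as an attempt, so give the job room to keep polling.
    public int $tries = 25;

    public function __construct(
        public string $threadId,
        public string $runId,
    ) {
    }

    public function handle(): void
    {
        $run = OpenAI::threads()->runs()->retrieve(
            threadId: $this->threadId,
            runId: $this->runId,
        );

        // Not finished yet: put the job back on the queue and check again in 5 seconds.
        if (! in_array($run->status, ['completed', 'failed', 'cancelled', 'expired'])) {
            $this->release(5);

            return;
        }

        // Finished: broadcast an event (Pusher, etc.) so the UI can show the reply.
        // e.g. event(new \App\Events\ThreadRunFinished($this->threadId));
    }
}
```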

Uploading Files

Ok, so far, neither the user nor the setup of the thread has needed to upload a real file. In this example, book two, we will attach a file with some data to the thread. This book will be called “The Stupid Things I Buy on Amazon.” Using a fake CSV file I had ChatGPT create, we will upload it and then ask the LLM some questions about it.

At this point, though, we are really pushing this example, so I will make a new assistant:

Here, we make a new Assistant called Shopping Assistant. And just like before, I used the Playground to set up the assistant and get the ID. Then, when the user first called it, I uploaded a sample CSV file.
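The upload itself might look roughly like this. At the time of writing (Assistants API v1), a file uploaded with the assistants purpose is referenced on a message via file_ids; the CSV path and assistant id below are placeholders.

```php
use OpenAI\Laravel\Facades\OpenAI;

// Upload the fake Amazon order-history CSV (the path is a placeholder).
$file = OpenAI::files()->upload([
    'purpose' => 'assistants',
    'file' => fopen(storage_path('app/amazon_orders.csv'), 'r'),
]);

// Attach the file to the thread by referencing it on a message
// (Assistants API v1 uses "file_ids" for this).
OpenAI::threads()->messages()->create($book->thread_id, [
    'role' => 'user',
    'content' => 'Here is my Amazon order history as a CSV file.',
    'file_ids' => [$file->id],
]);

// Run the thread so the assistant can work through the file.
OpenAI::threads()->runs()->create(
    threadId: $book->thread_id,
    parameters: ['assistant_id' => $shoppingAssistantId],
);
```

If you are on a newer version of the Assistants API, the attachment shape has changed, so check the current docs before copying this.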

And then, once that Run is done, we are ready to use the data.

NOTE: These runs take time, so you really need to watch for them and then let the user know when the thread is ready.

So now, I can start asking questions about my Amazon (fake data) history:

After a while, we get a reply to our question! And you will see the role is “assistant.”
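Reading that reply back out is roughly a matter of listing the thread’s newest messages and pulling the text content items; the sketch below uses toArray() to work with the raw API shape.

```php
use OpenAI\Laravel\Facades\OpenAI;

// Once the run is completed, list the newest messages on the thread
// (the list defaults to newest-first).
$messages = OpenAI::threads()->messages()->list($book->thread_id, ['limit' => 5]);

$reply = null;

foreach ($messages->data as $message) {
    if ($message->role !== 'assistant') {
        continue;
    }

    // Each message carries an array of content items; the text items hold the reply.
    foreach ($message->toArray()['content'] as $content) {
        if ($content['type'] === 'text') {
            $reply = $content['text']['value'];
            break 2;
        }
    }
}
```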

What about if we ask for a chart?

Ok, so this is where it gets tricky: how do we get the file?

This ONE Assistant message has TWO content items in the reply, one the image_file, the other the text.

And we get the file:
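A sketch of parsing that reply and downloading the generated chart; the storage path is a placeholder, and again toArray() is used so we can work with the raw content shape.

```php
use Illuminate\Support\Facades\Storage;
use OpenAI\Laravel\Facades\OpenAI;

// Grab the newest message on the thread (the assistant's chart reply).
$messages = OpenAI::threads()->messages()->list($book->thread_id, ['limit' => 1]);

foreach ($messages->data as $message) {
    // One assistant message can hold BOTH an image_file item and a text item.
    foreach ($message->toArray()['content'] as $content) {
        if ($content['type'] === 'image_file') {
            $fileId = $content['image_file']['file_id'];

            // Download the generated chart by its file id and save it locally.
            Storage::put("charts/{$fileId}.png", OpenAI::files()->download($fileId));
        }

        if ($content['type'] === 'text') {
            $caption = $content['text']['value'];
        }
    }
}
```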

That is about it. You can see there is a lot of detailed work:

  • Waiting for a run to be done.
  • Getting files.
  • Parsing “content” so you can return the “text” or get the “image_file.”
  • Saving enough state like “thread_id” or even “status.”

But it can make for a very succinct way to make context-based threads for your users.

I was already doing this a while back with Larachain.io, but I had to do a lot of the hard work myself, running a vector database and more.

The next thing I want to do, though, is show the power of Ollama.ai and how, with the right Laravel “driver,” we can talk to any LLM in the same way!
