PHP and LLMs Book — Why Use LLMs Anyways

Alfred Nutile
8 min read · Sep 6, 2024

--

Sign up for the newsletter to get updates about the “PHP and LLMs” book!

Or get a preview of the book at https://bit.ly/php_llms_sample

Purchase the book at https://bit.ly/php_llms

Big Picture View

A New Tool

What are LLMs, and why should I use them as a developer? This question came up recently, and I realized this book did not touch on it at all; it simply assumed the answer was known. So I want to take a moment to step out of the code and solutions and talk about why we as PHP developers need to start seriously considering LLMs as we go about our day-to-day work solving problems for customers, and maybe even building our own products or hobby ideas.

Eight years ago, I made a machine learning and PHP video on YouTube, and it is still my most popular video. Back then, AWS had released a service that started to make machine learning easy to host and build APIs around. The potential to use this API to parse sentiment or tag content got my attention, but my interest quickly faded because I still needed to know machine learning; I still needed to train models to do specific tasks.

But then, as we all know, OpenAI released an API that we could use like any API and get results. No training unless I wanted to and no Machine Learning expertise — just read the docs and throw your prompt at it. It was then that I realized that this could make my work more accessible and allow me to create things for myself and customers I could not even imagine doing before.

About two years ago, I heard about LangChain, a Python framework that enabled developers to build LLM-centric workflows and automation. It honestly got me worried.

> LangChain is a framework for developing applications powered by large language models (LLMs).

And it honestly had me thinking, “Do I need to move on from Laravel and PHP?”

But then I realized that Laravel, as a framework, has some great foundational elements for building a system that allows for easy automation: Task Scheduling, Batch Jobs, the HTTP Facade, the Process Facade, Postgres integration, and so much more. It is a fantastic foundation for building a web application that uses APIs like OpenAI, Ollama, and Claude to make the same things LangChain can build.

Two years later, after having built LaraChain.io, which became LaraLlama.io, and having generated numerous one-off ideas and used them to solve problems for customers, I still think Laravel and PHP have a solid place in this new chapter of Web and Application development.

But many people see LLMs as nothing more than hype and a chatbot.

This book can help you understand how it can help you in your daily work.

• It is your pair coding colleague
• It is there to do grunt work for you
• And it is there to really help you try new ways to solve problems for your customers and yourself

It is your pair coding colleague

I use it in my IDE daily alongside Claude or ChatGPT, and thanks to these tools I continue to get more done.

If you lack a certain amount of skill, you will easily lead yourself down many dead ends. But that has always been true, from Stack Overflow to Googling.

One example is a GIS project I started a while back. I had no real sense of how to pull it off; I was handed some "shape" files, and that was it. But a few prompts later, I knew how to convert them for Postgres, set up the migrations, import the data, query it, and more. As a lone freelancer, this was a big deal and a huge vote of confidence that I now had a tool at this assistant level.

Then there are the day-to-day examples: I ask questions about ops, code, SQL queries, and so on, and the answers not only solve the problem but teach me how the solution works.

It is there to do the grunt work for you

This is another big one. Using tools like Supermaven in my IDE, or pasting schemas into Claude and asking it to make Factories for me, is a huge time saver. When I kick off a project or idea, it types out the factories, tests, and more, saving me so much tedious work.

Without these tools, I would not have bothered with so many of the ideas and projects I have worked on over the past two years. And my customers are seeing more get done than ever before.

It is there to help you try new ways to solve problems for your customers and yourself.

And with all of the above, this is the most fun for me. I can do things I could not have imagined before! From hobby ideas, like having an LLM analyze a video of me playing pickleball and give me tips and ratings, to a simple prompt that parses the data I need out of emails to help my client quickly opt out users.

LaraLlama.io enabled me to parse emails from an inbox using a prompt, scrape websites for event data, and turn that data into event rows in the system.

Thanks to easy access to LLMs via APIs, many complex ideas are made simple.

Yes, it will take time to learn how to prompt, and what information needs to accompany the prompt if you are building, say, a RAG system. That takes a lot of trial and failure, but over time it is an excellent skill to build up. For example, asking for "JSON" was way less successful for me than asking for "Valid JSON"; it is these little details that can make for better results. Another example: I found a YouTube video on prompting, tried its complex approach, and had a miserable time. Then, after a lot of trial and error and other training, like the courses on https://www.deeplearning.ai/, I realized how simple prompts can really be. Many of mine are just this:

<role>
You are an assistant helping parse website data for Events.
<task>
Take the HTML data and text data and parse out events from that content. If there are no events on that day, just return an empty array. See the format below. All the event data is under the <context> section.
<format>
You will return a valid JSON of events. If there are no events, just return an empty array. Examples below:
[
{
"title": "Event Title Here",
"start_time": "Y-m-d h:i",
"summary": "summarize the information about it"
},
{
"title": "Event Title Here",
"start_time": "Y-m-d h:i",
"summary": "summarize the information about it"
}
]
or if there are no results just return
[]

<context>
All the event data will be here
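
Once the model's reply comes back, it still needs defensive handling, since LLMs sometimes wrap JSON in prose or markdown fences despite the instructions. Here is a minimal plain-PHP sketch of validating a reply against the format the prompt above asks for; the helper name `parseEventsReply` is mine, not from any library:

```php
<?php

// Hypothetical helper: validate an LLM reply against the event format the
// prompt asks for. Anything unparseable is treated as "no events".
function parseEventsReply(string $reply): array
{
    // Strip markdown code fences the model sometimes adds anyway.
    $reply = preg_replace('/^```(?:json)?|```$/m', '', trim($reply));

    $decoded = json_decode(trim($reply), true);

    if (!is_array($decoded)) {
        return []; // not valid JSON: fall back to an empty array
    }

    // Keep only entries that carry the keys the prompt requires.
    return array_values(array_filter($decoded, function ($event) {
        return is_array($event)
            && isset($event['title'], $event['start_time'], $event['summary']);
    }));
}
```

With a guard like this, a malformed reply degrades to "no events found" instead of breaking the pipeline.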

Why Not LLMs

If we look at the question from this perspective, I can think of a few reasons some people might hold back. One is privacy: using hosted LLMs means giving those companies data from your book, code, etc. If you are worried about that, consider Ollama and running a model on your own machine, or even building custom local NativePHP apps that use that local LLM.
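
To make the local option concrete: Ollama exposes a simple HTTP API on port 11434, so a plain-PHP call keeps everything on your machine. This is a sketch assuming Ollama is running locally and you have pulled a model (for example with `ollama pull llama3`); the function names are mine:

```php
<?php

// Build the request body for Ollama's /api/generate endpoint.
// 'stream' => false asks for one JSON object instead of a token stream.
function buildOllamaPayload(string $prompt, string $model = 'llama3'): string
{
    return json_encode([
        'model'  => $model,
        'prompt' => $prompt,
        'stream' => false,
    ]);
}

// POST the prompt to the local Ollama server; no data leaves the machine.
function askLocalLlm(string $prompt, string $model = 'llama3'): string
{
    $context = stream_context_create([
        'http' => [
            'method'  => 'POST',
            'header'  => "Content-Type: application/json\r\n",
            'content' => buildOllamaPayload($prompt, $model),
        ],
    ]);

    $raw = file_get_contents('http://localhost:11434/api/generate', false, $context);

    // Ollama returns the completion text under the "response" key.
    return json_decode((string) $raw, true)['response'] ?? '';
}
```

In a Laravel app you would likely reach for the HTTP Facade instead, but the request shape is the same.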

LLMs are Stealing Jobs

Then, there is fear of these LLMs stealing jobs from developers.

> “AI won’t steal your job, people leveraging AI will” — https://www.reddit.com/r/Futurology/comments/12axcom/ai_wont_steal_your_job_people_leveraging_ai_will/

This is true. However, it will also make it harder for junior developers, who may not always know what questions to ask of the system.

So, for now, start taking advantage of this tool to get your work done more efficiently, leaving you time and energy to tackle the harder or more complex problems.

Reliability

Reliability is another fair concern. These LLMs are getting better and better, and LaraLlama.io will return results like "I can not find an answer for you using the information available" rather than hallucinate or drift.

But 100% accuracy is not always possible. How much that matters depends on the use case and on what your options are besides using an LLM.

For example, if you summarize sales for the month and it needs to be accurate to the penny, then use something like https://github.com/Flowframe/laravel-trend and enjoy. But take my case of getting events from web pages. Sometimes the LLM might miss an event, but was it as reliable as, or more reliable than, me trying to parse every div on the page or writing a parser for all 20 websites? Heck yeah.

So, it depends on the problem you are trying to solve, the level of accuracy needed, and the options you have to solve that problem.

Testing

Another concern is how to test these systems, and there are a number of ways to do that. In this book, I will cover how you can use PEST or PHPUnit to mock requests to the API and get results you can test against. But I think the more important consideration is the ongoing quality of results, and there are several ways to handle that too.

• You can run weekly live tests and use another prompt to ask an LLM to compare the results against known good results and judge whether they hold up in quality.
• You can build validation prompts into your system, so before anything passes to the next step, like creating events or showing the user results, you pass it to Groq (which is very fast) and have it review and fix the results as needed.
• And there is plain manual QA, using patterns like those covered in https://www.deeplearning.ai/short-courses/automated-testing-llmops/ or https://www.deeplearning.ai/short-courses/improving-accuracy-of-llm-applications/
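
As a small taste of the mocking approach, here is a sketch of a PEST test using Laravel's Http facade to fake the OpenAI endpoint, so the suite never makes a live API call. The `EventParser` class is hypothetical; substitute whatever service wraps your LLM request:

```php
<?php

use Illuminate\Support\Facades\Http;

it('parses events from a mocked LLM response', function () {
    // Intercept any call to the OpenAI API and return a canned reply.
    Http::fake([
        'api.openai.com/*' => Http::response([
            'choices' => [[
                'message' => [
                    'content' => '[{"title":"PHP Meetup","start_time":"2024-09-06 18:00","summary":"Monthly meetup"}]',
                ],
            ]],
        ]),
    ]);

    // EventParser is a stand-in for your own LLM-calling service.
    $events = app(EventParser::class)->parse('<html>...</html>');

    expect($events)->toHaveCount(1)
        ->and($events[0]['title'])->toBe('PHP Meetup');

    Http::assertSentCount(1); // exactly one (faked) API request went out
});
```

Because the response is faked, the test is fast, deterministic, and free, which is exactly what you want in CI.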

Future-proofing

Future-proofing is another excellent question as you dig into this, and two things matter here. First, just by doing and learning, you will start to see and get ahead of the seemingly blazing-fast progression of this technology, and before long you will focus more on what matters and less on the hype. Second, this book will talk a lot about agnostic drivers, which is key. Only knowing OpenAI is a bad choice; you really need to be able to use different LLMs without writing new code.
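
To make the driver idea concrete, here is a minimal plain-PHP sketch; the interface and class names are illustrative, not taken from a specific package:

```php
<?php

// The application codes against this small interface, never a vendor SDK.
interface LlmDriver
{
    public function complete(string $prompt): string;
}

// One driver per provider (OpenAI, Claude, Ollama, ...). Stubbed here.
final class OllamaDriver implements LlmDriver
{
    public function complete(string $prompt): string
    {
        // ... real HTTP call to the local Ollama server elided ...
        return '';
    }
}

// Call sites depend only on the interface, so swapping providers means
// swapping one binding, not rewriting application code.
function summarize(LlmDriver $llm, string $text): string
{
    return $llm->complete("Summarize the following:\n\n" . $text);
}
```

In Laravel you would typically bind the chosen driver in the service container, the same pattern the framework uses for mail, cache, and queue drivers.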

Wrapping it up

Overall, this book helps you understand and demystify AI so you can use it in your daily work. At the end of the book, I will share a list of resources that got me going and that keep me up to date with AI news. There will also be many chapters of practical code examples, plus the GitHub repo https://github.com/alnutile/php-llms/ to help you get all the ideas working.

--

Alfred Nutile

Creator of LaraLlama.io, an open-source product. Day-to-day developer using Laravel, PHP, Vue.js, and Inertia.