Local LLM and Laravel in 10 Minutes, with Free Local Embeddings
Getting started is easy.
Setup
Download Ollama https://ollama.com/download and get it running.
You should be able to connect to it like this (pull a model first with ollama pull llama3):
curl http://localhost:11434/api/generate -d '{
"model": "llama3",
"prompt": "Why is the sky blue?"
}'
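By default this endpoint streams, so the reply comes back as a series of JSON objects, one chunk at a time. Illustrative output (not verbatim; the timestamps and tokens will differ):

{"model":"llama3","created_at":"...","response":"The","done":false}
{"model":"llama3","created_at":"...","response":" sky","done":false}

The final object has "done": true along with timing stats.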
If you see responses streaming back like that, your local LLM is running great!
Using HERD, you can easily get a Laravel app going: https://herd.laravel.com/
Talking to Ollama
<?php

namespace LlmLaraHub\LlmDriver;

use Illuminate\Support\Facades\Http;

class OllamaClient extends BaseClient
{
    /**
     * @see https://github.com/ollama/ollama/blob/main/docs/api.md
     */
    public function completion(string $prompt): string
    {
        // Non-streaming completion: Ollama returns the whole answer in one JSON payload.
        $response = $this->getClient()->post('/generate', [
            'model' => 'llama3',
            'prompt' => $prompt,
            'stream' => false,
        ]);

        return $response->json()['response'];
    }

    protected function getClient()
    {
        // Ollama runs locally, so no API token is needed.
        $baseUrl = 'http://127.0.0.1:11434/api/';

        return Http::withHeaders([
            'content-type' => 'application/json',
        ])
            ->timeout(120) // local models can be slow, especially on first load
            ->baseUrl($baseUrl);
    }
}
You will then be able to talk to the local LLM.
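A quick sanity check, as a minimal sketch (this assumes the container can resolve the client; BaseClient comes from the LlmLaraHub codebase):

$client = app(\LlmLaraHub\LlmDriver\OllamaClient::class);

echo $client->completion('Why is the sky blue?'); // prints the model's answer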
If You Want to Vectorize Data
If you do not have the professional license for HERD, you will need to install Postgres yourself (super easy) using https://postgresapp.com/
Once that is done, you are set to connect Laravel to it. First, make the database using your tool of choice; I like TablePlus https://tableplus.com/
NOTE: Laravel can auto-create the database, but with Postgres I never seem to get it to work. Oh well.
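If you would rather script it, the database matching the .env below is just one statement in psql or TablePlus:

CREATE DATABASE larachain;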
Update your .env to talk to Postgres. For example:
DB_CONNECTION=pgsql
DB_HOST=127.0.0.1
DB_PORT=5432
DB_DATABASE=larachain
DB_USERNAME=postgres
DB_PASSWORD=password
Create a model called Document (or whatever you like) and add the columns role and content.
I make several embedding fields to save data to. See https://github.com/LlmLaraHub/llmlarahub/blob/main/database/migrations/2024_04_09_142357_add_alternative_embeddings_sizes_to_document_chunks.php for an example (more on this in a moment).
Make sure to run CREATE EXTENSION vector; in the Postgres UI or on the command line.
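Here is a minimal migration sketch for one of those embedding columns; the document_chunks table name and embedding_1024 column (1024 dimensions, matching mxbai-embed-large) are assumptions, so adjust to your schema:

use Illuminate\Database\Migrations\Migration;
use Illuminate\Support\Facades\DB;

return new class extends Migration
{
    public function up(): void
    {
        // Requires CREATE EXTENSION vector; to have been run already.
        DB::statement('ALTER TABLE document_chunks ADD COLUMN embedding_1024 vector(1024)');
    }

    public function down(): void
    {
        DB::statement('ALTER TABLE document_chunks DROP COLUMN embedding_1024');
    }
};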
Then add an embedding method to the OllamaClient:
public function embedData(string $prompt): array
{
    // Ollama's /api/embeddings endpoint returns {"embedding": [...]}.
    $response = $this->getClient()->post('/embeddings', [
        'model' => 'mxbai-embed-large',
        'prompt' => $prompt,
    ]);

    $results = $response->json();

    return data_get($results, 'embedding', []);
}
NOTE: You need to download the model first with ollama pull MODEL_NAME (here, ollama pull mxbai-embed-large).
Any embedding you get back can be saved to a table that uses the https://github.com/pgvector/pgvector-php library.
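A minimal sketch of storing one chunk, assuming the DocumentChunk model and embedding_1024 column from above ($client is the OllamaClient, $document and $text are hypothetical; the Vector cast is from pgvector-php):

use Illuminate\Database\Eloquent\Model;
use Pgvector\Laravel\Vector;

class DocumentChunk extends Model
{
    protected $guarded = [];

    // Cast the pgvector column so arrays go in and Vector objects come out.
    protected $casts = [
        'embedding_1024' => Vector::class,
    ];
}

// Embed a chunk of text and store it alongside the raw content.
DocumentChunk::create([
    'document_id' => $document->id,
    'content' => $text,
    'embedding_1024' => $client->embedData($text),
]);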
Then, when a question comes in, you embed that as well and search the database like this:
// <-> is pgvector's L2 distance operator; ordering by it returns
// the five chunks closest to the question's embedding.
$results = DocumentChunk::query()
    ->join('documents', 'documents.id', '=', 'document_chunks.document_id')
    ->selectRaw(
        "document_chunks.{$embeddingSize} <-> ? as distance, document_chunks.content, document_chunks.{$embeddingSize} as embedding, document_chunks.id as id",
        [$embedding->embedding]
    )
    ->where('documents.collection_id', $chat->chatable->id)
    ->limit(5)
    ->orderByRaw('distance')
    ->get();
NOTE: $embeddingSize can vary since different models produce embeddings of different sizes, which is why I make several columns to save data to (see the migration linked above).
So you are using the embedded prompt (from the user) to search the data.
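For completeness, a sketch of where the two variables in that query might come from ($question is the user's prompt; Vector is from pgvector-php, and in LlmLaraHub $embedding is a response DTO, faked here as a plain object):

use Pgvector\Laravel\Vector;

// Embed the question with the same model used for the documents,
// and pick the column matching that model's embedding size.
$embeddingSize = 'embedding_1024';
$embedding = (object) [
    'embedding' => new Vector($client->embedData($question)),
];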
And that is it: you can start saving the state of your conversations and storing data that you embed, for free, with local Ollama.
Want to get started even more easily? Use LaraChain! https://larachain.io/
Just download it, set it up like above with Postgres, and use your local LLM, OpenAI, or Claude. See more here: https://youtu.be/rj5YQLbWF9U?si=YyC1Jb5DT4jceU_D