What Are Tools in the Scope of LLMs and Why Are They So Important

Alfred Nutile

When diving into the world of the OpenAI API and other large language models (LLMs), one thing I say a lot to new users is, “It’s all just text.” While this may sound simplistic, it captures an essential truth about how these systems operate. At their core, tools and functions within LLMs are indeed composed of text. However, this text serves a highly dynamic role, allowing the LLM to perform complex tasks by combining various tools with your prompts.

What Are Tools/Functions?

It’s all just text

These tools and functions act as extensions of the LLM’s capabilities. You provide the model with text that describes the tools and their rules, including parameters and usage guidelines. The LLM then decides whether it needs to employ any of these tools to fulfill your request. If it determines that a tool is necessary, it follows the specified rules to execute the tool and process the input. The results are then returned to the LLM, which uses this information to refine its response. This iterative process continues until the LLM can fully address your prompt or execute the desired action. In essence, it’s like having a highly intelligent assistant that knows when and how to use specialized software to achieve the best possible outcome for your queries.
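To make that loop concrete, here is a minimal sketch in Python with the LLM stubbed out as a plain function. The tool name and behavior are invented for illustration; this is not LaraLlama's actual implementation.

```python
# A minimal sketch of the tool-use loop described above. The "LLM"
# is a stub; the point is the shape of the loop: describe tools as
# text, let the model decide, run the tool, feed the result back.

def fake_llm(history, tool_names):
    """Stand-in for a real LLM call: it sees only text and decides,
    from that text alone, whether a tool is needed."""
    text = " ".join(history).lower()
    if "tool result" in text:
        # A tool already ran; produce the final answer from its output.
        return {"answer": history[-1].replace("Tool result: ", "")}
    if "weather" in text and "get_weather" in tool_names:
        return {"tool": "get_weather", "args": {"city": "Boston"}}
    return {"answer": "I can answer that directly."}

def get_weather(city):
    return f"Sunny in {city}"  # pretend this hits an external API

TOOLS = {"get_weather": get_weather}

def run(prompt):
    history = [prompt]
    while True:
        decision = fake_llm(history, TOOLS.keys())
        if "tool" in decision:
            result = TOOLS[decision["tool"]](**decision["args"])
            history.append(f"Tool result: {result}")  # back to the LLM
        else:
            return decision["answer"]
```

Note the loop has no fixed number of steps: it keeps going until the model stops requesting tools, which is exactly the iterative behavior described above.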

Example: Non-Technical Illustration

To bring the concept of tools and functions to life, let’s consider a simple, non-technical example. Imagine I ask my son to hang a framed photograph on the wall. To help him with this task, I provide him with five tools: a hammer, a screwdriver, a leveler, pliers, and a monkey wrench. Next to each tool is an index card explaining its purpose and how it should be used.

He starts by reading each card. After reviewing them, he decides to use two tools: the hammer and the leveler. The hammer’s card instructs him to use a nail, while the leveler’s card provides guidance on how to ensure the photograph is perfectly straight.

Following these instructions, he first uses the leveler to mark where the nail should go. Then, he uses the hammer to drive the nail into the wall. Once the photograph is hung and perfectly leveled, he calls me down to inspect his work.

This example parallels how LLMs utilize tools and functions. The tools (hammer, leveler, etc.) are like the functions you can provide to an LLM, each with its own set of instructions. The LLM, much like my son, decides which tools are necessary based on the prompt and uses them in the appropriate sequence to achieve the desired outcome.

Now a Real World Example

Let’s delve into a practical, real-world scenario to illustrate the power of tools and functions within LLMs. Suppose I need to send a weekly update email summarizing all articles about LaraLlama to a list of business owners. Here’s how the process unfolds with the help of an LLM.

First, I provide the LLM with a prompt asking it to summarize all articles from this week about LaraLlama and email the summary to a specified list of business owners. Along with this prompt, I also supply the LLM with a set of registered tools it can use to complete the task.

The registered tools include:

• Query Documents based on Date Range
• Email Results to a List of Emails
• Get the Weather
• Search and Summarize
• Summarize
• Standards Checker

Upon receiving the prompt, the LLM evaluates which tools are necessary. For this task, it determines that it needs the “Query Documents based on Date Range”, “Summarize”, and “Email Results to a List of Emails” tools. Each tool is defined with a specific name, like “query_documents_based_on_date_range”, and includes a description and parameters, such as “start_date” and “end_date”.
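To show that such a tool really is “just text,” here is what one definition might look like, loosely following the OpenAI-style function schema. The exact field layout is an assumption; LaraLlama's real format may differ.

```python
import json

# Hypothetical definition of the "Query Documents based on Date Range"
# tool from the example, written in an OpenAI-style function schema.
query_tool = {
    "type": "function",
    "function": {
        "name": "query_documents_based_on_date_range",
        "description": "Fetch all documents published between two dates.",
        "parameters": {
            "type": "object",
            "properties": {
                "start_date": {"type": "string", "description": "ISO date, e.g. 2024-06-01"},
                "end_date": {"type": "string", "description": "ISO date, e.g. 2024-06-07"},
            },
            "required": ["start_date", "end_date"],
        },
    },
}

print(json.dumps(query_tool, indent=2))  # it's all just text
```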

The LLM first uses the “Query Documents based on Date Range” tool to gather all relevant articles about LaraLlama from the specified week. Next, it uses the “Summarize” tool to condense the information into a digestible format. Finally, it employs the “Email Results to a List of Emails” tool to send the summary to the provided list of email addresses.
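The three-step chain above can be sketched like this, with each tool stubbed out as a plain function. The names mirror the example; the bodies are placeholders, not real implementations.

```python
# Stubbed versions of the three tools chained in the example:
# query -> summarize -> email. Real versions would hit a database,
# an LLM, and a mail service respectively.

def query_documents_based_on_date_range(start_date, end_date):
    return ["Article 1 about LaraLlama", "Article 2 about LaraLlama"]

def summarize(docs):
    return f"{len(docs)} articles this week: " + "; ".join(docs)

def email_results(addresses, body):
    return f"sent to {len(addresses)} recipients"

docs = query_documents_based_on_date_range("2024-06-01", "2024-06-07")
summary = summarize(docs)
status = email_results(["owner@example.com"], summary)
```

The key point is that the LLM, not the programmer, decides this ordering at run time based on the prompt and the tool descriptions.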

Throughout this process, the LLM continuously evaluates the results against the initial prompt, ensuring that each step is completed accurately. This iterative approach allows the LLM to fulfill complex tasks efficiently, much like a highly skilled assistant that can dynamically decide which specialized tools to use to achieve the best outcome.

Why is this So Important?

Tools and Functions are key to Agentic Systems

Understanding tools and functions is crucial for appreciating the capabilities of agentic systems like LaraLlama. At a basic level, tools enable LaraLlama to answer prompts by utilizing predefined functions. However, the true power lies in defining these tools and allowing the system to autonomously determine how to use them to address the prompt effectively.

Let’s revisit our previous example and take it a step further. Imagine the Orchestrator LLM, which initially received the prompt, not only determines which tools to use but also initiates two parallel processes to gather results. This multitasking ability significantly enhances efficiency and accuracy.


Once the parallel processes complete their tasks, the LLM employs a “Quality Agent” to compare the results, ensuring the best possible outcome is selected. This quality check ensures that the information provided is accurate and reliable.

Next, the “Marketing Review” agent steps in to ensure the summary reads well and aligns with company standards. This agent refines the output, making it suitable for the intended business audience.

Finally, the “CRM Agent” adds valuable context based on data from the CRM database, tailoring the email to the specific needs and preferences of the target recipients. This level of personalization enhances the relevance and impact of the communication.

By orchestrating multiple tools and agents, the LLM transforms from a simple response system into a sophisticated agent capable of executing complex tasks with high precision and quality. This orchestration is what makes tools and functions so essential, driving the evolution of intelligent, autonomous systems.
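As a rough sketch (not LaraLlama's actual code), the orchestration above might look like this, with each agent reduced to a plain function and the two parallel drafts run on a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

# Each "agent" below is just a function; in a real system each would
# wrap its own LLM call. Names follow the agents described above.

def draft_summary(variant):
    return f"Summary draft {variant}: LaraLlama articles this week..."

def quality_agent(candidates):
    # Compare parallel results and pick the "best"; here, crudely,
    # the longest draft stands in for a real quality check.
    return max(candidates, key=len)

def marketing_review(text):
    return text + " [reviewed for tone and standards]"

def crm_agent(text):
    return text + " [personalized from CRM data]"

def orchestrate():
    # Two parallel processes gather results, then each agent refines
    # the winner in sequence.
    with ThreadPoolExecutor(max_workers=2) as pool:
        candidates = list(pool.map(draft_summary, ["A", "B (extended)"]))
    best = quality_agent(candidates)
    return crm_agent(marketing_review(best))
```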

As a Programmer, I Can Just Build This Myself, So What's the Big Deal?

As a programmer, you might think, “I can just build this myself, so what’s the big deal?” The answer lies in the flexibility and efficiency that comes with using tools and functions in a system like LaraLlama.

Yes, but!

• Minimizing Code Complexity: Building tools that can be dynamically ordered and utilized by the system reduces the need for extensive coding to cover every possible use case. This modular approach means you can create individual functions or tools that are highly reusable and adaptable to various scenarios.
• Natural Language Interaction: One of the most significant advantages is that users can interact with these systems using normal language. Instead of writing complex code or commands, users can simply ask the system to perform a task or a series of tasks in plain text.

Ultimately, it all comes down to text. This text-based approach allows for a highly flexible and user-friendly system where the heavy lifting is done by the LLM, deciding the best way to use the tools at its disposal to achieve the desired outcome. This not only streamlines the development process but also opens up advanced functionalities to a broader audience, including those without deep technical expertise.

We can build just the blocks, which can then be used as needed in so many different use cases.

Show me the Code!

Let’s see how this works at the code level for LLMs that do not have native tool support.

Without native tool support, there was lots of if/else logic I had to write, and lots of user choices in the UI, e.g., “This is a Search,” “This needs Emails,” “This needs Images made,” etc.
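As a hypothetical example of that hand-written routing (the branch names are invented for illustration):

```python
# The kind of routing a non-tool-enabled setup forces on you: the
# user picks a path in the UI, or brittle keyword checks guess one.

def route(prompt, user_choice=None):
    choice = user_choice  # e.g. a dropdown selection in the UI
    if choice is None:  # fall back to fragile keyword matching
        if "search" in prompt.lower():
            choice = "search"
        elif "email" in prompt.lower():
            choice = "email"
        elif "image" in prompt.lower():
            choice = "make_image"
        else:
            choice = "chat"
    return choice
```

Every new capability means another branch here and another control in the UI, which is exactly the complexity tools eliminate.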

Ok, Let’s Fake It a Bit Better

First, let’s see what functions look like in JSON format.

  • It is just JSON/text
  • These are the tools registered in LaraLlama
  • You can imagine an “email” tool, or “query_by_date_range”, “make_image”, “search_the_web”
  • The snake_case name calls a Class in the application
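A sketch of such a registry, using the hypothetical tool names from the list above. In Laravel the snake_case name would resolve to a Class; here it maps to a plain Python callable, and the LLM only ever sees the JSON text.

```python
import json

# Hypothetical registered tools: each snake_case name maps to a
# handler. The bodies are placeholders, not real implementations.

def email_handler(addresses, body):
    """Email the body to a list of addresses."""
    return f"emailed {len(addresses)} people"

def query_by_date_range(start_date, end_date):
    """Fetch documents between two dates."""
    return ["doc1", "doc2"]

def make_image(prompt):
    """Generate an image from a prompt."""
    return "image.png"

REGISTRY = {
    "email": email_handler,
    "query_by_date_range": query_by_date_range,
    "make_image": make_image,
}

# The LLM never sees the code, only this text:
tool_descriptions = json.dumps(
    [{"name": name, "description": fn.__doc__} for name, fn in REGISTRY.items()],
    indent=2,
)
```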

So First, We Ask the Non-Tool-Enabled LLM Which Tool to Use

Below is how we can ask a non-tool-enabled LLM what to do with the prompt. Notice we use a message array pattern for chatting with these LLMs, and we pass in the content, which is the name (the key) and description of each tool.

Then, if all goes well

  • we get back “summarize_collection”
  • We call that function
  • We take the results and give the final LLM both the results and the original prompt to wrap things up for the user
  • That’s it! We got the non-tool-enabled LLM to decide on a tool to use, then we passed that to the Tool Class to do the rest and return the results for the next step.
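The steps above can be sketched as follows. Both LLM calls are stubbed out, and “summarize_collection” is the tool name from the example; the prompt wording is an assumption about how one might phrase the tool descriptions.

```python
# Sketch of the three-step flow: ask a non-tool-enabled LLM to name
# a tool, call that tool, then hand the result back for the final
# answer. The "LLM" is a stub so the flow is runnable on its own.

TOOLS_PROMPT = """You can use exactly one of these tools.
Reply with ONLY the tool name.
- summarize_collection: summarize all documents in a collection
- search_the_web: search the web for fresh information
"""

def fake_chat(messages):
    """Stand-in for an LLM chat call using a message array."""
    if any("Reply with ONLY the tool name" in m["content"] for m in messages):
        return "summarize_collection"
    return "Here is your weekly LaraLlama summary: ..."

def summarize_collection():
    return "3 articles about LaraLlama this week."

TOOLS = {"summarize_collection": summarize_collection}

def answer(prompt):
    # Step 1: ask which tool to use
    tool_name = fake_chat([
        {"role": "system", "content": TOOLS_PROMPT},
        {"role": "user", "content": prompt},
    ]).strip()
    # Step 2: call the chosen tool (the Tool Class, in LaraLlama terms)
    result = TOOLS[tool_name]()
    # Step 3: final call with the original prompt plus the tool result
    return fake_chat([
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": result},
    ])
```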

Ok, What About Tool-Enabled LLMs?

OpenAI, Claude, Gemini, and others.

The complex approach we used above can now become much simpler.

We tell the LLM which tools are available. If it requests any tools, we pass the arguments to each tool and build up the history in our message array for the final output to the user. Alternatively, there might be no UI at all: these tasks could be scheduled jobs that run at night to email, gather data, and so on. It doesn’t matter; the LLM can handle it all seamlessly.
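A sketch of the core of that loop, with the provider's response simulated. The `tool_calls` shape loosely follows OpenAI's format, but this is a stand-in, not a real API call.

```python
import json

# Sketch: the provider's response says which tools to call and with
# what JSON arguments; we run them and append each result to the
# message history so the model can compose its final answer.

def run_tool_calls(tool_calls, registry, messages):
    for call in tool_calls:
        fn = registry[call["name"]]
        result = fn(**json.loads(call["arguments"]))
        messages.append({"role": "tool", "name": call["name"], "content": str(result)})
    return messages

def summarize(text):
    return f"Summary: {text[:20]}..."

registry = {"summarize": summarize}
messages = [{"role": "user", "content": "Summarize this week's LaraLlama posts"}]

# Pretend the model responded with this tool call:
fake_tool_calls = [{
    "name": "summarize",
    "arguments": json.dumps({"text": "LaraLlama shipped tool support and more"}),
}]
messages = run_tool_calls(fake_tool_calls, registry, messages)
```

With a real tool-enabled model, the branching logic from earlier disappears: the provider decides which tools to call, and our code only has to dispatch and append.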

Bringing It All Together

Hopefully, you now see how crucial tools are to the future of building LLM-centric systems. One of the biggest breakthroughs is that non-developers can use tools like LaraLlama.io to create automations and complex systems using just plain language — no code required!

Links

Claude’s API Docs

OpenAI API Docs

https://platform.openai.com/docs/guides/function-calling

Docs.LaraLlama.io Add Functions

Medium Post — by Me!


Alfred Nutile

Creator of LaraLlama.io, an open-source product. Day-to-day developer using Laravel, PHP, Vue.js, and Inertia.