Function Calling in Ruby: How to Create Intelligent AI Agents
Large Language Models (LLMs) are powerful, but they're generally limited to the knowledge in their training data. What if your AI needs to search your database, create structured itineraries, or call other services? This is where Function Calling (also known as tool calling) comes in: it turns your model into an intelligent agent that can interact with real-time data and services. As we saw in the last post, the ruby_llm gem not only makes it easier to work across multiple AI providers (OpenAI, Claude, and Gemini), but it also greatly simplifies the implementation of function calling. To see this implementation in action and explore the details, check out the repository containing the code examples used in this article.
What is Function Calling?
Function calling allows LLMs to delegate tasks to external tools during conversations. Instead of just generating text, the LLM can:
- Detect when it needs information it doesn't have
- Call a function/tool with appropriate parameters
- Receive the result and incorporate it into its response
- Continue the conversation naturally with the new information
Think of it like giving your AI a toolbox. When asked "What's the weather in Paris?", the AI realizes it needs current data, calls your WeatherTool, receives the data, and responds with accurate information.

Basic Example
Before we build a complex agent, let's start with the simplest and most common pattern: External API Integration. The goal here is to give the LLM access to real-time data it wasn't trained on. This WeatherTool will act as a bridge to a third-party API (which we'll simulate for this example). The key things to observe in this code are how we define the parameters (like city) that the LLM must extract from the user's text, and how the execute method runs our internal logic. We return a structured response (like status: "success" and the data) that the LLM can understand and use to formulate its final answer.

```ruby
class WeatherTool
  def self.name
    "get_current_weather"
  end

  def self.description
    "Get the current weather for a specific city"
  end

  def self.parameters
    {
      type: "object",
      properties: {
        city: {
          type: "string",
          description: "The city name (e.g., 'Paris', 'Tokyo')"
        },
        units: {
          type: "string",
          enum: ["celsius", "fahrenheit"],
          default: "celsius"
        }
      },
      required: ["city"]
    }
  end

  def execute(city:, units: "celsius")
    # Simulate API call (use a real weather API in production)
    weather_data = {
      "paris" => { temp: 18, condition: "Partly Cloudy" },
      "tokyo" => { temp: 22, condition: "Clear" }
    }

    data = weather_data[city.downcase]

    if data
      {
        status: "success",
        city: city,
        temperature: data[:temp],
        condition: data[:condition],
        summary: "The weather in #{city} is #{data[:condition].downcase} with #{data[:temp]}°C"
      }
    else
      { status: "no_data", message: "Could not find weather data for '#{city}'" }
    end
  end
end
```
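Since the tool is just a plain Ruby class, you can exercise it directly and inspect the structured response before wiring it to an LLM:

```ruby
tool = WeatherTool.new

tool.execute(city: "Paris")
# => { status: "success", city: "Paris", temperature: 18, condition: "Partly Cloudy",
#      summary: "The weather in Paris is partly cloudy with 18°C" }

tool.execute(city: "Berlin")
# => { status: "no_data", message: "Could not find weather data for 'Berlin'" }
```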
That's it! Now wrap it for ruby_llm:

```ruby
class RubyLLMWeatherTool < RubyLLM::Tool
  description "Get the current weather for a specific city"

  param :city, type: 'string', desc: "City name", required: true
  param :units, type: 'string', desc: "Temperature units", required: false

  def execute(city:, units: 'celsius')
    WeatherTool.new.execute(city: city, units: units)
  end
end
```
Using Your Tool
```ruby
# Create a chat with tools
chat = RubyLLM.chat.with_tools(RubyLLMWeatherTool.new)

# Ask a question
response = chat.ask("What's the weather in Paris?")
puts response.content
# => "The weather in Paris is currently partly cloudy with a temperature of 18°C..."
```
You might be wondering how the chat knew to call RubyLLMWeatherTool just from the sentence "What's the weather in Paris?". This is a two-step process handled by the ruby_llm gem and the LLM provider's "Tool Use" API. When you call chat.ask(...), the gem first sends your prompt along with the structured descriptions and parameters of all available tools (like WeatherTool) to the LLM. The LLM then analyzes your prompt against these tool descriptions and determines a match. Instead of replying to you, it first responds to the gem with a special "tool call" request: essentially a JSON object instructing which tool to run and what arguments to use (e.g., city: 'Paris'). The gem receives this instruction, executes your local WeatherTool Ruby method to get the actual data, and then makes a second API call to the LLM. This second call includes the original prompt, the tool call instruction, and, most importantly, the data result from your tool. Now equipped with this new context, the LLM finally synthesizes the natural-language answer you see. You can read more in the ruby_llm documentation.
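To make that flow concrete, here is a rough sketch of the round trip as plain data. The payload shape below is illustrative (it loosely follows an OpenAI-style tool-use format; real shapes vary by provider), but it shows what ruby_llm is handling for you:

```ruby
require "json"

# Round trip 1: instead of text, the model returns a "tool call"
# (illustrative shape; actual payloads vary by provider)
tool_call = {
  "name" => "get_current_weather",
  "arguments" => '{"city": "Paris", "units": "celsius"}'
}

# The gem runs the matching tool locally with the extracted arguments...
args = JSON.parse(tool_call["arguments"], symbolize_names: true)
result = WeatherTool.new.execute(**args)

# Round trip 2: `result` is sent back to the model, which now has the
# data it needs to write the natural-language answer the user sees.
```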
Building a Travel Assistant
Let's build a more sophisticated system with multiple tools. We'll create three tools, each demonstrating a different pattern. The first one, Weather, we already saw above: it demonstrates the basic tool structure with validation and status-based responses.
Destination Search (Business Rules Enforcement)
This tool demonstrates a more advanced pattern: Enforcing Business Rules. The LLM itself knows nothing about your company's internal policies, such as "Gold members can visit these 10 cities, but Silver members can only visit 3." We should never let the LLM guess these rules. Instead, the tool acts as a "guarded entry point". The LLM's only job is to extract the necessary arguments (e.g., pass_tier: 'gold') from the user's request. Our code then takes over to run the actual business logic, ensuring the rules are applied correctly and consistently.

```ruby
class DestinationSearchTool
  def self.description
    "Search for travel destinations available with your SkyTraveler membership pass. " \
    "Check which destinations you can visit based on your pass tier (Silver, Gold, or Platinum)."
  end

  def execute(pass_tier:, query: "any", season: "any", limit: 10)
    # Validate pass tier
    unless ["silver", "gold", "platinum"].include?(pass_tier.downcase)
      return { status: "error", message: "Invalid pass tier" }
    end

    # Business rule: get destinations based on pass tier
    available = get_destinations_by_pass(pass_tier.downcase)
    results = filter_destinations(available, query, season, limit)

    {
      status: "success",
      pass_tier: pass_tier.capitalize,
      total_available: available.size,
      showing: results.size,
      results: results
    }
  end

  private

  # --- Business logic implementation ---
  # (Constants and methods like get_destinations_by_pass,
  # filter_destinations, SILVER_DESTINATIONS, etc.,
  # are defined here. View the full code in the repository.)
  #
  # def get_destinations_by_pass(tier) ... end
  # def filter_destinations(...) ... end
end
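The elided helpers live in the repository, but to give an idea of their shape, here is a hypothetical minimal version: a tier-to-list lookup plus a simple filter. The destination lists are placeholders inferred from the sample output below, not the real ones:

```ruby
# Hypothetical sketch of the elided business logic --
# see the repository for the real lists and filtering rules.
SILVER_DESTINATIONS = [
  "Mexico City", "Cancun", "Toronto", "Vancouver", "Miami", "San Francisco"
].freeze
GOLD_DESTINATIONS = (SILVER_DESTINATIONS + ["Paris", "Tokyo"]).freeze
PLATINUM_DESTINATIONS = (GOLD_DESTINATIONS + ["Sydney", "Dubai"]).freeze

def get_destinations_by_pass(tier)
  {
    "silver" => SILVER_DESTINATIONS,
    "gold" => GOLD_DESTINATIONS,
    "platinum" => PLATINUM_DESTINATIONS
  }.fetch(tier, [])
end

def filter_destinations(available, query, season, limit)
  results = available
  results = results.select { |d| d.downcase.include?(query.downcase) } unless query == "any"
  # (season filtering would go here)
  results.first(limit)
end
```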
Here's an example of use (via the RubyLLMDestinationTool wrapper, which we'll define in the "Putting It All Together" section, since with_tools expects a RubyLLM::Tool):

```ruby
chat = RubyLLM.chat.with_tools(RubyLLMDestinationTool.new)

response = chat.ask("What are the travel options for a Silver Pass?")
puts response.content
# => "With a Silver Pass, you can access various regional destinations including
#     Mexico City and Cancun in Mexico, Toronto and Vancouver in Canada,
#     Miami and San Francisco in the USA..."
```
Itinerary Builder (Structured Data Extraction)
This tool demonstrates two powerful concepts. The first is Structured Data Extraction. Sometimes you don't want a natural-language answer; you want the LLM to act as an intelligent parser that converts a user's request (e.g., "a 3-day trip to Tokyo") into a structured JSON object you can save in a database or process in a service. The second concept is the Halt Pattern. This tells the ruby_llm wrapper to stop the process after the tool executes and return the raw JSON data directly to our application, skipping the final LLM synthesis step. This is perfect for populating a UI component, like a form or a schedule, rather than generating a simple chat response.

```ruby
class ItineraryBuilderTool
  # We define this custom class to act as a signal.
  # The RubyLLMItineraryTool wrapper (defined further down)
  # will check whether the result is an instance of this class.
  # If so, it will call ruby_llm's `halt` method.
  class HaltResult
    attr_reader :content

    def initialize(content)
      @content = content
    end
  end

  def execute(destination:, days:, pace: "moderate")
    # Validate inputs
    return { status: "error", message: "Destination required" } if destination.empty?
    days = [[days.to_i, 1].max, 14].min

    # Build the structured itinerary
    itinerary = {
      destination: destination,
      duration: "#{days} days",
      pace: pace,
      daily_schedule: build_daily_schedule(destination, days, pace)
    }

    # Halt pattern: return data immediately, skip LLM synthesis
    halt(itinerary)
  end

  private

  def halt(data)
    HaltResult.new({ status: "success", itinerary: data })
  end

  # --- Business logic implementation ---
  # (The build_daily_schedule method is defined here,
  # responsible for creating the day-by-day plan.
  # View the full code in the repository.)
  #
  # def build_daily_schedule(destination, days, pace) ... end
end
```
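The real build_daily_schedule also lives in the repository. As a rough, hypothetical sketch of its shape, it could template one entry per day, with the pace deciding how many activities each day gets:

```ruby
# Hypothetical sketch of the elided helper -- the repository version
# is richer (interests, locations, realistic activities, etc.).
def build_daily_schedule(destination, days, pace)
  activities_per_day = { "relaxed" => 2, "moderate" => 3, "packed" => 4 }.fetch(pace, 3)

  (1..days).map do |day|
    {
      day: day,
      title: day == 1 ? "Arrival & Exploration" : "Day #{day} in #{destination}",
      activities: Array.new(activities_per_day) do |i|
        { time: format("%02d:00", 9 + i * 2), name: "Activity #{i + 1}", duration: "1h" }
      end
    }
  end
end
```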
Example of use (again via the wrapper, RubyLLMItineraryTool, defined below):

```ruby
chat = RubyLLM.chat.with_tools(RubyLLMItineraryTool.new)

response = chat.ask("Can you create a 7-day relaxed itinerary (interest in food) for Tokyo?")
puts response.content
# => {"status":"success","itinerary":{"destination":"Tokyo","duration":"7 days","pace":"relaxed",
#    "interests":["culture","food"],"daily_schedule":[{"day":1,"title":"Arrival & Exploration",
#    "activities":[{"time":"09:00","name":"Breakfast at local café","duration":"1h",
#    "location":"Tokyo - TBD"},{"time":"10:30","name":"Visit main historical..."
```
Putting It All Together
Now, let's define the wrappers for our two more complex tools. Note that the RubyLLMWeatherTool was already defined in the basic example above.

```ruby
class RubyLLMDestinationTool < RubyLLM::Tool
  description "Search SkyTraveler pass destinations by tier"

  param :pass_tier, type: 'string', required: true
  param :query, type: 'string', required: false
  # Expose season so the LLM can filter (used in the multi-tool example below)
  param :season, type: 'string', required: false

  def execute(pass_tier:, query: 'any', season: 'any')
    DestinationSearchTool.new.execute(pass_tier: pass_tier, query: query, season: season)
  end
end

class RubyLLMItineraryTool < RubyLLM::Tool
  description "Create day-by-day travel itinerary"

  param :destination, type: 'string', required: true
  param :days, type: 'integer', required: true

  def execute(destination:, days:)
    result = ItineraryBuilderTool.new.execute(destination: destination, days: days)

    # Handle the halt result: skip LLM synthesis and return raw JSON
    if result.is_a?(ItineraryBuilderTool::HaltResult)
      halt(result.content.to_json)
    else
      result
    end
  end
end
```
With all wrappers defined (Weather, Destination, and Itinerary), we can instantiate a chat that uses all of them:
```ruby
chat = RubyLLM.chat.with_tools(
  RubyLLMWeatherTool.new,
  RubyLLMDestinationTool.new,
  RubyLLMItineraryTool.new
)

response = chat.ask("I have a Gold pass. Where can I go in winter? Check the weather and create a 3-day itinerary.")

# The LLM will:
# 1. Call DestinationSearchTool with pass_tier="gold", season="winter"
# 2. Call WeatherTool for a suitable destination
# 3. Call ItineraryBuilderTool with the chosen destination
# 4. Synthesize everything into a coherent response
```
Advantages of this Approach
The advantages of this architecture go far beyond simple execution. The LLM intelligently handles automatic tool selection and can even perform multi-tool chaining, deciding which functions to call in sequence based purely on the conversation's context.
From an engineering perspective, this approach is highly robust. It provides type safety through schemas (preventing malformed inputs) and enables structured error handling via status-based responses, allowing the system to degrade gracefully. Furthermore, it promotes good design: tools are easily testable in isolation, and they let your code strictly enforce business logic instead of leaving the LLM to guess at your internal rules. Finally, patterns like the Halt Pattern give you complete control over data extraction, allowing you to bypass LLM synthesis entirely when your application just needs raw, structured data.
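To make the testability point concrete: because each tool is a plain Ruby object, you can unit-test it with no LLM, API key, or network involved. A minimal Minitest example against the WeatherTool from earlier:

```ruby
require "minitest/autorun"
# require_relative "weather_tool" # wherever the class is defined

class WeatherToolTest < Minitest::Test
  def test_returns_success_for_a_known_city
    result = WeatherTool.new.execute(city: "Paris")

    assert_equal "success", result[:status]
    assert_equal 18, result[:temperature]
  end

  def test_returns_no_data_for_an_unknown_city
    result = WeatherTool.new.execute(city: "Atlantis")

    assert_equal "no_data", result[:status]
  end
end
```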
When to Use (and When Not to Use) Function Calling
Function calling is the clear choice whenever your AI needs to break out of its static knowledge bubble and interact with the outside world. This includes accessing real-time data (like weather, stock prices, or live database queries), enforcing business rules (such as membership tiers or pricing logic), or taking action (like creating records, sending emails, or triggering workflows). It is also invaluable for extracting structured data from user text or searching private knowledge bases that the model was not trained on.
However, function calling introduces architectural overhead and latency: each tool call adds at least one extra round trip to the provider. It is not the right solution for simple, self-contained conversations where the LLM's training data is sufficient, or for quick, one-off text generation tasks that don't require external context.
Conclusion
Function calling with ruby_llm transforms LLMs into intelligent agents. By implementing three simple patterns (API integration, business rules enforcement, and structured data extraction), you can build sophisticated AI systems that interact with real-world data and services. Some best practices:
- Always Return Structured Responses: Use status indicators (success, error, no_data)
- Never Raise Exceptions: Return error objects so the LLM can handle them gracefully (see the sketch after this list)
- Validate Inputs: Even though schemas should catch issues, validate in your execute method
- Write Clear Descriptions: Help the LLM understand when to use each tool
- Keep Tools Fast: Execute in < 5 seconds when possible
- Separate Concerns: Keep the tool implementation separate from the ruby_llm wrapper
- Use Halt Pattern: For structured data extraction without LLM synthesis
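To illustrate the "Never Raise Exceptions" practice, an execute method can rescue failures and hand the LLM an error object it can relay gracefully. A sketch, assuming a hypothetical fetch_weather helper that talks to a real API and may raise:

```ruby
def execute(city:, units: "celsius")
  data = fetch_weather(city, units) # hypothetical helper that may raise
  { status: "success", city: city, temperature: data[:temp] }
rescue Timeout::Error
  { status: "error", message: "Weather service timed out, please try again" }
rescue StandardError => e
  # The LLM receives this hash and can apologize, retry, or ask the user
  # for different input, instead of the whole chat turn crashing.
  { status: "error", message: "Weather lookup failed: #{e.message}" }
end
```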