Best Practices for LLM Tools or Function Calling for Oracle Developers
What are LLM tools?
APEX 26.1 is around the corner, and one of the big features will be native LLM tools/function calling support.
LLMs can only output text. For agentic tasks the LLM needs a way to act on the outside world. For example, it may need to look up data in a database as additional context, write some data, or trigger an API process.
- You register tool definitions (name, description, JSON schema for params) up front.
- You send the user’s question + the tool list to the LLM.
- The LLM either answers directly, or replies with a tool call: {tool_name, arguments}.
- Your code runs the tool and returns the result back to the LLM.
- Repeat steps 3–4 until the LLM produces a final answer.
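For illustration, here is roughly what a registered tool definition looks like on the wire in the OpenAI-style chat completions format (other providers use slightly different field names):

```json
{
  "type": "function",
  "function": {
    "name": "GET_CALENDAR_EVENTS",
    "description": "Get calendar events for a specific date.",
    "parameters": {
      "type": "object",
      "properties": {
        "date": {
          "type": "string",
          "description": "Date to search for, in YYYY-MM-DD format"
        }
      },
      "required": ["date"]
    }
  }
}
```

The framework you use builds this payload for you from the tool metadata; you normally never write it by hand.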
Tools have been around for almost two years, which is ages in AI time. This feature was the reason I started UC AI (open-source PL/SQL AI framework). So there are already plenty of best practices and learnings available on how to use tools effectively, even though they will be new to the native APEX world.
Example
The easiest example is letting an LLM access data it otherwise wouldn’t see. For example, querying our calendar.
In APEX 26.1 tools are created like plug-ins inside an app. In UC AI they are stored in a table.
We need to define the following attributes:
- Tool code: a unique identifier for the tool
- Description: a description of what the tool does
- Parameters: JSON schema defining the parameters that the tool accepts.
- Function call: the code that will be executed when the tool is called.
declare
  l_schema  json_object_t;
  l_tool_id uc_ai_tools.id%type;
begin
  -- Use https://www.united-codes.com/products/uc-ai/docs/other/json-schema/
  -- Or ask an LLM to generate one
  l_schema := json_object_t.parse('
    {
      "$schema": "http://json-schema.org/draft-07/schema#",
      "type": "object",
      "properties": {
        "date": {
          "type": "string",
          "format": "date",
          "description": "Date to search for in the calendar, in YYYY-MM-DD format"
        }
      },
      "required": ["date"]
    }');

  -- Create the tool
  l_tool_id := uc_ai_tools_api.create_tool_from_schema(
    p_tool_code     => 'GET_CALENDAR_EVENTS',
    p_description   => 'Get calendar events for a specific date.',
    -- UC AI evaluates this snippet when the LLM calls the tool;
    -- :parameters binds the JSON arguments.
    p_function_call => 'return my_pkg.get_calendar_events(:parameters);',
    p_json_schema   => l_schema,
    p_tags          => apex_t_varchar2('calendar')
  );

  commit;
end;
Then when we ask the LLM about our availability on Christmas Eve, it will notice that the GET_CALENDAR_EVENTS tool would be useful to answer the question. It will respond with something similar to this (depending on the provider):
"tool_calls": [
  {
    "id": "call_27sGV8SxJf5dp78lNq2dc01c",
    "type": "function",
    "function": {
      "name": "GET_CALENDAR_EVENTS",
      "arguments": "{\"date\":\"2026-12-24\"}"
    }
  }
]
The AI framework (UC AI, APEX 26.1) will then call the PL/SQL function my_pkg.get_calendar_events with the parameters {"date":"2026-12-24"} and send the results back to the LLM. The LLM can then use the results to answer the original question.
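With OpenAI-compatible APIs, the tool result typically goes back as a message with role "tool" that references the call id. A sketch (the event data here is made up):

```json
{
  "role": "tool",
  "tool_call_id": "call_27sGV8SxJf5dp78lNq2dc01c",
  "content": "[{\"title\":\"Christmas Eve dinner\",\"start\":\"2026-12-24T18:00:00\"}]"
}
```

Again, the framework handles this round trip; the important part is that whatever your function returns ends up as the `content` the LLM reads.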
Remember, you can put anything in a tool that you can code. For example, for our calendar agent, we can let it create and update calendar entries, send out invitations to others via email, etc.
How to design your tools
1: Scope
Design your tools like you design your PL/SQL functions. Aim for a clear purpose: not too broad, not too specific. You want to give the LLM enough freedom to use the tool in different contexts, but not so much freedom that it gets confused about when and how to use it.
- Don’t: GET_CALENDAR_EVENTS_TODAY, GET_CALENDAR_EVENTS_FOR_TOMORROW, GET_CALENDAR_EVENTS_FOR_NEXT_WEEK
- Do: GET_CALENDAR_EVENTS with a parameter for the date or date range.
2: Descriptions and context
If the LLM does not call a tool when you think it should, it probably does not understand the purpose of the tool or when to use it. Make sure to provide a clear description of what the tool does and when it should be used.
- Don’t: ‘Get calendar events.’
- Do: ‘Get calendar events for a specific date or date range. Use this tool when the user asks about their schedule or availability.’
It also helps to mention the tools in the system prompt. Make the LLM aware of the tools and their purpose upfront, and add examples of when and how to use them.
SYSTEM PROMPT:
==============
Before creating a new event, always check if it exists already by using the
GET_CALENDAR_EVENTS tool. For example if the user says "I want to create a
calendar entry for a meeting with John tomorrow at 3pm" you should...
3: Return meaningful messages
Keep in mind that the LLM does not know what is going on behind the scenes. If your tool call succeeds, you need to tell the LLM.
Let’s take a look at how the function behind an ADD_CALENDAR_EVENT tool could look. Note how it returns a message describing exactly what happened. If the LLM does not get a response back, it will not know whether the tool call succeeded or not.
FUNCTION add_calendar_event(p_parameters IN CLOB)
  RETURN CLOB
IS
  l_parameters json_object_t := json_object_t.parse(p_parameters);
  l_start_time date :=
    to_date(l_parameters.get_string('start_time'), 'YYYY-MM-DD"T"HH24:MI:SS');
  l_end_time   date :=
    to_date(l_parameters.get_string('end_time'), 'YYYY-MM-DD"T"HH24:MI:SS');
  l_title      varchar2(255) := l_parameters.get_string('title');
BEGIN
  calendar_api.add_event(
    p_start_time => l_start_time,
    p_end_time   => l_end_time,
    p_title      => l_title
  );

  -- Convert the dates explicitly so the message does not depend on NLS settings
  RETURN 'Calendar event "' || l_title || '" added successfully from '
    || to_char(l_start_time, 'YYYY-MM-DD HH24:MI') || ' to '
    || to_char(l_end_time, 'YYYY-MM-DD HH24:MI') || '.';
END;
Robustness
4: Validation and error messages
One great thing about tools is that even when something goes wrong, the agent can recover, call the tool again with the right parameters, and continue with its task. But for that to work, the LLM needs to know what the issue is. Again, it does not know the code, so technical error messages will not help.
- Validate all parameters
- Don’t raise exceptions; return meaningful error messages that the LLM can understand and act upon.
- Don’t include stack traces or technical details in the error messages; they won’t help the LLM (again, it does not know the code behind the tool).
FUNCTION add_calendar_event(p_parameters IN CLOB)
  RETURN CLOB
IS
  l_parameters json_object_t;
  l_start_str  varchar2(50);
  l_end_str    varchar2(50);
  l_start_time date;
  l_end_time   date;
  l_title      varchar2(255);
BEGIN
  -- 1. Parse JSON safely
  BEGIN
    l_parameters := json_object_t.parse(p_parameters);
  EXCEPTION
    WHEN OTHERS THEN
      RETURN 'Error: parameters could not be parsed as JSON. '
        || 'Retry with a valid JSON object containing "start_time", "end_time", and "title".';
  END;

  -- 2. Read and check required fields
  l_start_str := l_parameters.get_string('start_time');
  l_end_str   := l_parameters.get_string('end_time');
  l_title     := l_parameters.get_string('title');

  IF l_start_str IS NULL THEN
    RETURN 'Error: "start_time" is required (format: YYYY-MM-DDTHH24:MI:SS, e.g. 2026-12-24T15:00:00).';
  END IF;
  IF l_end_str IS NULL THEN
    RETURN 'Error: "end_time" is required (format: YYYY-MM-DDTHH24:MI:SS, e.g. 2026-12-24T16:00:00).';
  END IF;
  IF l_title IS NULL OR length(trim(l_title)) = 0 THEN
    RETURN 'Error: "title" is required and must not be empty.';
  END IF;
  IF length(l_title) > 255 THEN
    RETURN 'Error: "title" is too long (max 255 characters, got ' || length(l_title) || ').';
  END IF;

  -- 3. Validate date formats with the offending value in the message
  BEGIN
    l_start_time := to_date(l_start_str, 'YYYY-MM-DD"T"HH24:MI:SS');
  EXCEPTION
    WHEN OTHERS THEN
      RETURN 'Error: "start_time" value "' || l_start_str
        || '" is not in the expected format YYYY-MM-DDTHH24:MI:SS (e.g. 2026-12-24T15:00:00).';
  END;
  BEGIN
    l_end_time := to_date(l_end_str, 'YYYY-MM-DD"T"HH24:MI:SS');
  EXCEPTION
    WHEN OTHERS THEN
      RETURN 'Error: "end_time" value "' || l_end_str
        || '" is not in the expected format YYYY-MM-DDTHH24:MI:SS (e.g. 2026-12-24T16:00:00).';
  END;

  -- 4. Validate business rules
  IF l_end_time <= l_start_time THEN
    RETURN 'Error: "end_time" (' || l_end_str || ') must be after "start_time" (' || l_start_str || ').';
  END IF;

  -- 5. Execute, catching unexpected errors so the agent can react instead of crashing
  BEGIN
    calendar_api.add_event(
      p_start_time => l_start_time,
      p_end_time   => l_end_time,
      p_title      => l_title
    );
  EXCEPTION
    WHEN OTHERS THEN
      RETURN 'Error: could not add calendar event. The event was not created. '
        || 'Reason: ' || sqlerrm || '. Please verify the parameters.';
  END;

  RETURN 'Calendar event "' || l_title || '" added successfully from '
    || l_start_str || ' to ' || l_end_str || '.';
END;
5: Limit tool calls
Theoretically, your LLM could get stuck in a loop of calling tools without ever returning a final response. This is probably a sign that something is wrong with your tool design or prompt, or that the model isn’t capable enough.
For that reason, it is a best practice to limit the number of tool calls. Let the agent fail instead of burning through your budget. With UC AI, there is a parameter to do so:
l_result := uc_ai.generate_text(
  p_user_prompt    => 'What is the email address of Jim?',
  p_provider       => UC_AI.c_provider_openai,
  p_model          => UC_AI_OPENAI.c_model_gpt_5,
  p_max_tool_calls => 6
);
Performance & context
6: Context is expensive
Arguably the biggest takeaway from my two-day AI for Oracle developers workshop is that context is the scarcest resource in AI interactions. The more input an LLM has, the more expensive the calls become and, more importantly, the worse the LLM will perform. This is called “context rot”: just like humans, LLMs can be overwhelmed by too much information.
If you watch your coding agents closely, you can see that they don’t read whole files. They first use search tools to find the relevant parts and then only read line ranges. We can apply the same principle to our tools for fetching data.
- Don’t: get_all_customers
- Do: search_customers (search term as parameter)
- Do: pagination (return the initial 10 rows and allow an offset parameter for more)
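A paginated search tool could look roughly like this. The customers table, its columns, and the parameter names are assumptions for the sketch:

```sql
-- Sketch: a search tool that returns at most 10 rows per call.
-- customers(name, email) and the parameter names are hypothetical.
function search_customers (p_parameters in clob) return clob
is
  l_params json_object_t := json_object_t.parse(p_parameters);
  l_term   varchar2(255) := l_params.get_string('search_term');
  l_offset pls_integer   := nvl(l_params.get_number('offset'), 0);
  l_result json_array_t  := json_array_t();
begin
  for rec in (
    select name, email
      from customers
     where upper(name) like '%' || upper(l_term) || '%'
     order by name
    offset l_offset rows fetch next 10 rows only
  )
  loop
    declare
      l_row json_object_t := json_object_t();
    begin
      l_row.put('name',  rec.name);
      l_row.put('email', rec.email);
      l_result.append(l_row);
    end;
  end loop;

  if l_result.get_size = 0 then
    return 'No customers found for "' || l_term || '". Try a different search term.';
  end if;

  -- Tell the LLM how to get the next page instead of dumping everything
  return l_result.to_clob()
    || chr(10) || 'Showing up to 10 rows starting at offset ' || l_offset
    || '. Call again with a higher "offset" for more.';
end;
```

The trailing hint is deliberate: it tells the model explicitly how to page through results, so it does not have to guess.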
Additionally, you can return Toon instead of JSON. Toon is a new data format optimized for LLMs: fewer input tokens, measurable performance gains. With UC AI you can just call uc_ai_toon.to_toon(l_json). Read my previous blog post about Toon for more info.
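To get a feel for the shape, here is a small JSON array next to a hand-written TOON rendering (my sketch based on the TOON spec; check the actual uc_ai_toon.to_toon output for the authoritative encoding):

```
JSON:
[{"id":1,"name":"Alice"},{"id":2,"name":"Bob"}]

TOON:
[2]{id,name}:
  1,Alice
  2,Bob
```

The keys appear once in the header instead of once per row, which is where the token savings on uniform result sets come from.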
7: Filter tools based on agent
Context cost applies to tools too: it can be overwhelming for an LLM to have access to too many tools. The easiest way to solve this is to only provide access to the relevant tools based on the task or agent. For example, a calendar agent does not need access to your customer database tools.
In UC AI you can tag your tools and then the framework will only expose the relevant tools to the agent based on these tags.
uc_ai.g_tool_tags := apex_t_varchar2('calendar');
If you still have too many tools, there are advanced techniques to let another LLM preselect which tools should be given. Anthropic has built a Tool Search Tool that lets LLMs discover tools on the fly.
Operations
8: Logging
If you build a good agent, people will start using it. Now you have a fleet of autonomous systems doing actions in your precious database. Without observability, you are blind.
Add logging so you can analyze how often your tools are used, by whom, and what goes wrong when they fail. This is crucial for debugging and improving your tools and agents.
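A minimal sketch of such a log, assuming a hypothetical tool_call_logs table. An autonomous transaction keeps the log entry even if the surrounding work is rolled back:

```sql
-- Hypothetical log table; adjust names and columns to your conventions.
create table tool_call_logs (
  id         number generated always as identity primary key,
  tool_code  varchar2(255),
  called_by  varchar2(255),
  parameters clob,
  result     clob,
  is_error   number(1),
  called_at  timestamp default systimestamp
);

-- Call this from each tool function just before returning.
procedure log_tool_call (
  p_tool_code  in varchar2,
  p_parameters in clob,
  p_result     in clob
) is
  pragma autonomous_transaction;  -- log survives a rollback of the main work
begin
  insert into tool_call_logs (tool_code, called_by, parameters, result, is_error)
  values (
    p_tool_code,
    coalesce(sys_context('APEX$SESSION', 'APP_USER'), user),
    p_parameters,
    p_result,
    case when p_result like 'Error:%' then 1 else 0 end
  );
  commit;
end;
```

With the parameters and results captured, you can later query which tools fail most often and with which inputs.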
Security
Murphy’s law: “Anything that can go wrong will go wrong”.
The core value proposition of LLMs is also their biggest risk. Unlike your hand-written code, an LLM is not deterministic. This means you give up control and hope that it performs as expected.
Because of this, you need to plan ahead. All the tools you build could be used against you. The biggest risk is prompt injection, where a user can input something to overwrite the developers’ system instructions and, for example, ask the agent to do as much damage as possible using the available tools.
9: Tools need access control
In your APEX app it is normal to implement authorization schemes to only allow certain users to access certain features. It would be fatal if your tools were a door to bypass these restrictions.
Because the agent is not using the APEX app, you need to implement access control at the database layer.
For example, I am building an APEX-based enterprise document management system at United Codes. For such a system, it is crucial to make sure that only authorized users can access and modify the documents. At the database layer, Oracle offers features like VPD (Virtual Private Database) and the context API to implement row-level security.
Inside the PL/SQL API layer, I build checks into every procedure to make sure that the current user is authorized to perform the action. The same goes for queries: all views are based on a macro that filters out any documents or folders the user is not authorized to see.
A few weeks ago I built an agent that people can use to ask questions about their documents. It has a few tools to search, list, and read documents and folders. Because of my precautions, I can easily have the agent run under the current user’s privileges and be sure that it only sees documents the user is authorized to access.
If I had skipped those checks, a user could ask the agent to list and then extract data from documents they are not authorized to see.
10: Sanitize your inputs
First rule of AI club: do not let your agents generate and execute SQL statements. Second rule of AI club: do not let your agents generate and execute SQL statements.
LLMs can generate valid SQL statements. But there is a reason your coding agent asks you to review the generated code before executing it. It can make mistakes, it can get confused, and it can be manipulated by malicious inputs.
It also requires lots of context about the data model to come up with a valid SQL statement. The objectively better approach is to offer APIs (e.g., add_calendar_event, get_calendar_events) with parameters.
But also make sure that the parameters cannot be used against you:
- Bind parameters safely; don’t allow for SQL injection
- Don’t allow the LLM to pass parameters that would result in slow queries → vector for denial-of-service attack
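As an illustration, a range-based GET_CALENDAR_EVENTS variant could cap the requested range before touching the data. The calendar_events table and the parameter names are assumptions; format validation is omitted for brevity:

```sql
-- Sketch: cap LLM-supplied parameters so they cannot force expensive queries.
function get_calendar_events (p_parameters in clob) return clob
is
  l_params json_object_t := json_object_t.parse(p_parameters);
  l_from   date := to_date(l_params.get_string('from_date'), 'YYYY-MM-DD');
  l_to     date := to_date(l_params.get_string('to_date'),   'YYYY-MM-DD');
  l_result clob;
begin
  -- Cap the range so a manipulated parameter cannot request years of data
  if l_to - l_from > 31 then
    return 'Error: the date range must not exceed 31 days.';
  end if;

  -- Static SQL: the parameters are bound, never concatenated into SQL text
  select json_arrayagg(
           json_object(
             'title'      value title,
             'start_time' value to_char(start_time, 'YYYY-MM-DD"T"HH24:MI:SS')
           )
           returning clob
         )
    into l_result
    from calendar_events
   where start_time >= l_from
     and start_time <  l_to + 1;

  return nvl(l_result, 'No events found in the requested range.');
end;
```

Because the function uses static SQL with PL/SQL variables, there is no string concatenation for an attacker to inject into, and the range cap bounds the query cost.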
11: Guardrails
You can make agent actions less harmful. Rather than letting the agent execute a hard delete, soft-delete by marking the row as deleted.
You could also implement a history table/audit layer where it would be easy to restore data if something goes wrong.
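A soft-delete tool function could look roughly like this (table and column names are assumptions):

```sql
-- Sketch: mark the row as deleted instead of removing it.
function delete_calendar_event (p_parameters in clob) return clob
is
  l_params json_object_t := json_object_t.parse(p_parameters);
  l_id     number := l_params.get_number('event_id');
begin
  update calendar_events
     set deleted_at = systimestamp,
         deleted_by = coalesce(sys_context('APEX$SESSION', 'APP_USER'), user)
   where id = l_id
     and deleted_at is null;

  if sql%rowcount = 0 then
    return 'Error: no active event found with id ' || l_id || '.';
  end if;

  return 'Event ' || l_id || ' was marked as deleted. It can be restored if needed.';
end;
```

If a prompt-injected or confused agent deletes the wrong event, restoring it is a single update instead of a restore from backup.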
12: Rate limiting
Treat your tools like an outside API. If someone tries to call them over and over again, it is probably a sign of malicious intent. Stop them early rather than burning through your budget and waiting for the damage to be done.
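A simple check could count recent calls per user before executing a tool. This sketch assumes a hypothetical log table like tool_call_logs(called_by, called_at):

```sql
-- Sketch: allow at most 30 tool calls per user per minute.
function check_rate_limit (p_user in varchar2) return boolean
is
  l_calls pls_integer;
begin
  select count(*)
    into l_calls
    from tool_call_logs
   where called_by = p_user
     and called_at > systimestamp - interval '1' minute;

  return l_calls < 30;
end;
```

When the function returns false, have the tool return a message like 'Error: rate limit exceeded, try again later' instead of executing, so the agent can report it to the user.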
13: Loss of personal information / data leakage
Always remember that LLMs need your data as input. If you (or the law) think that some of the data is sensitive and should not be shared with a third party, you have an issue.
You could pseudonymize or obfuscate data before sending it. Most of the time the LLM does not need to know real names, dates, etc. to do its job.
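As a small illustration, a regular expression can mask email addresses before the text ever leaves the database (the sample text is made up):

```sql
declare
  l_text varchar2(4000) :=
    'Meeting with Jane Smith (jane.smith@acme.com) about renewal.';
begin
  -- Replace anything that looks like an email address with a placeholder
  l_text := regexp_replace(
    l_text,
    '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}',
    '[EMAIL]'
  );
  -- l_text is now: Meeting with Jane Smith ([EMAIL]) about renewal.
  dbms_output.put_line(l_text);
end;
```

Real pseudonymization is more involved (you may need a mapping table to restore names in the response), but the principle is the same: strip what the LLM does not need.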
But the safest approach is to host an open model yourself (which can work offline). Luckily, many open models are super capable, almost as good as the frontier closed ones (GPT, Claude, Gemini, etc.). However, you need big hardware investments to be able to run them. UC AI supports using local models via Ollama.
Start small, iterate, and learn
Studies indicate that most AI projects fail. LLMs are an incredible and powerful technology but not the answer for everything. Instead of blindly following the hype, start small, experiment, and learn. Build a simple tool, connect it to an LLM, and see how it performs. Then iterate and improve based on your learnings.
Also talk to your users. Are you actually solving a real problem for them? Do they even want to use an AI agent? Don’t be like Microsoft, which just slams AI into people’s operating systems without asking if they actually want it or need it.
Agents are definitely a double-edged sword. If they work, it feels like magic; if they don’t, they can make your day miserable. As much good as coding agents can do, they can also lead to you losing control over your codebase or even dropping a production database if you blindly trust them and give them the capability. Be aware of the operational risk and challenges of a fleet of autonomous agents unleashed in your system and make sure to have the right guardrails, monitoring, and logging in place.
AI is a tool like a hammer. Not everything is a nail.
Conclusion
Tools are a core technology for building agentic AI systems. As these systems heavily rely on data, you, as a database engineer, are in a prime position to leverage this technology and stay relevant in the age of AI. Get your hands dirty, experiment with tools, build some agents, and learn by doing.
If you have other best practices or other opinions, please use the comment functionality below to share your thoughts.
If you like, check out my open-source PL/SQL AI framework, UC AI, where you can build your own tools and agents in PL/SQL and use them in APEX or any other application.
If you’re looking for training where you can get hands-on experience with building tools and agents, check out my AI for Oracle developers workshop. In two days you will learn the fundamentals of LLMs up to advanced techniques like long-term memory and multi-agent systems.