Using Agents and Tools
IMPORTANT
Not all LLMs support function calling and the use of tools. Please see the compatibility section for more information.
As outlined by Andrew Ng in Agentic Design Patterns Part 3, Tool Use, LLMs can act as agents by leveraging external tools. Andrew notes some common examples such as web searching or code execution that have obvious benefits when using LLMs.
In the plugin, tools are simply context and actions that are shared with an LLM via a system prompt. The LLM can act as an agent by requesting tools via the chat buffer, which in turn orchestrates their use within Neovim. Agents and tools can be added as participants to the chat buffer by using the @ key.
IMPORTANT
The agentic use of some tools in the plugin results in you, the developer, acting as the human-in-the-loop and approving their use.
How Tools Work
Tools make use of an LLM's function calling ability. All tools in CodeCompanion follow OpenAI's function calling specification, here.
When a tool is added to the chat buffer, the LLM is instructed by the plugin to return a structured JSON schema which has been defined for each tool. The chat buffer parses the LLM's response and detects the tool use before triggering the agent/init.lua file. The agent then triggers a series of events which sees tools added to a queue and worked through sequentially, with their output being shared back to the LLM via the chat buffer. Depending on the tool, flags may be inserted on the chat buffer for later processing.
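For illustration, below is a minimal sketch of an OpenAI-style function definition, expressed as a Lua table. The tool name and parameters are hypothetical and not CodeCompanion's actual schema:

```lua
-- Hypothetical OpenAI-style function definition for a tool.
-- The name and parameters are illustrative only.
local weather_tool = {
  type = "function",
  ["function"] = {
    name = "get_weather",
    description = "Fetch the current weather for a given city",
    parameters = {
      type = "object",
      properties = {
        city = { type = "string", description = "Name of the city" },
      },
      required = { "city" },
    },
  },
}
```

The LLM responds with a tool call that names the function and supplies arguments matching this schema, which the plugin can then act on.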
An outline of the architecture can be seen here.
Community Tools
There is also a thriving ecosystem of user-created tools:
- VectorCode - A code repository indexing tool to supercharge your LLM experience
- mcphub.nvim - A powerful Neovim plugin for managing MCP (Model Context Protocol) servers
The section of the discussion forums dedicated to user-created tools can be found here.
@cmd_runner
The @cmd_runner tool enables an LLM to execute commands on your machine, subject to your authorization. For example:
Can you use the @cmd_runner tool to run my test suite with `pytest`?
Use the @cmd_runner tool to install any missing libraries in my project
Some commands do not write any data to stdout which means the plugin can't pass the output of the execution to the LLM. When this occurs, the tool will instead share the exit code.
The LLM is specifically instructed to detect if you're running a test suite, and if so, to insert a flag in its request. This is then detected and the outcome of the test is stored in the corresponding flag on the chat buffer. This makes it ideal for workflows to hook into.
@editor
The @editor tool enables an LLM to modify the code in a Neovim buffer. If a buffer's content has been shared with the LLM then the tool can be used to add, edit or delete specific lines. Consider pinning or watching a buffer to avoid manually re-sending a buffer's content to the LLM:
Use the @editor tool to refactor the code in #buffer{watch}
Can you apply the suggested changes to the buffer with the @editor tool?
@files
NOTE
All file operations require approval from the user before they're executed
The @files tool leverages the Plenary.Path module to enable an LLM to perform various file operations on the user's disk:
- Creating a file
- Reading a file
- Editing a file
- Deleting a file
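For context, the sketch below shows the kind of Plenary.Path calls these operations map onto. The file path and contents are hypothetical, and this is not the tool's actual implementation:

```lua
-- Illustrative Plenary.Path usage; not the @files tool's implementation.
local Path = require("plenary.path")

local p = Path:new("notes/todo.md")          -- hypothetical file

p:touch({ parents = true })                  -- create the file (and parent directories)
p:write("- [ ] ship the feature\n", "w")     -- edit: overwrite its contents
local contents = p:read()                    -- read the file back
p:rm()                                       -- delete the file
```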
@web_search
The @web_search tool enables an LLM to search the web for a specific query. This can be useful to supplement an LLM's knowledge cut-off date with more up-to-date information.
Can you use the @web_search tool to tell me the latest version of Neovim?
@full_stack_dev
The plugin enables tools to be grouped together. The @full_stack_dev agent is a combination of the @cmd_runner, @editor and @files tools:
Let's use the @full_stack_dev tools to create a new app
Approvals
Some tools, such as the @cmd_runner, require the user to approve any actions before they can be executed. If the tool requires this, a vim.fn.confirm dialog will prompt you for a response.
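As a rough illustration (not the plugin's actual code), an approval prompt built on vim.fn.confirm works along these lines, with the prompt text being hypothetical:

```lua
-- Illustrative only: how a vim.fn.confirm approval prompt behaves.
local choice = vim.fn.confirm(
  "Run the command `pytest`?", -- hypothetical prompt text
  "&Yes\n&No\n&Cancel",        -- available choices
  2                            -- default choice ("No")
)

if choice == 1 then
  -- the user approved, so the tool's action would go ahead here
end
```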
Useful Tips
Combining Tools
Consider combining tools for complex tasks:
@full_stack_dev I want to play Snake. Can you create the game for me in Python and install any packages you need. Let's save it to ~/Code/Snake. When you've finished writing it, can you open it so I can play?
Automatic Tool Mode
The plugin allows you to run tools on autopilot. This automatically approves any tool use instead of prompting the user, disables any diffs, and automatically saves any buffers that the agent has edited. Simply set the global variable vim.g.codecompanion_auto_tool_mode to enable this, or set it to nil to undo it. Alternatively, the keymap gta will toggle the feature whilst in the chat buffer.
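For example, in Lua (assuming any non-nil value enables the mode, as implied above):

```lua
-- Enable automatic tool mode: approvals are skipped, diffs are disabled
-- and edited buffers are saved automatically
vim.g.codecompanion_auto_tool_mode = true

-- Disable it again
vim.g.codecompanion_auto_tool_mode = nil
```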
Compatibility
Below is the tool use status of various adapters and models in CodeCompanion:
Adapter | Model | Supported | Notes |
---|---|---|---|
Anthropic | claude-3-opus-20240229 | ✅ | |
Anthropic | claude-3-5-haiku-20241022 | ✅ | |
Anthropic | claude-3-5-sonnet-20241022 | ✅ | |
Anthropic | claude-3-7-sonnet-20250219 | ✅ | |
Copilot | gpt-4o | ✅ | |
Copilot | gpt-4.1 | ✅ | |
Copilot | o1 | ✅ | |
Copilot | o3-mini | ✅ | |
Copilot | o4-mini | ✅ | |
Copilot | claude-3-5-sonnet | ✅ | |
Copilot | claude-3-7-sonnet | ✅ | |
Copilot | claude-3-7-sonnet-thought | ❌ | Doesn't support function calling |
Copilot | gemini-2.0-flash-001 | ❌ | |
Copilot | gemini-2.5-pro | ✅ | |
DeepSeek | deepseek-chat | ✅ | |
DeepSeek | deepseek-reasoner | ❌ | Doesn't support function calling |
Gemini | Gemini-2.0-flash | ✅ | |
Gemini | Gemini-2.5-pro-exp-03-25 | ✅ | |
Gemini | gemini-2.5-flash-preview | ✅ | |
GitHub Models | All | ❌ | Not supported yet |
Huggingface | All | ❌ | Not supported yet |
Mistral | All | ❌ | Not supported yet |
Novita | All | ❌ | Not supported yet |
Ollama | All | ❌ | Is currently broken |
OpenAI Compatible | All | ❗ | Dependent on the model and provider |
OpenAI | gpt-3.5-turbo | ✅ | |
OpenAI | gpt-4.1 | ✅ | |
OpenAI | gpt-4 | ✅ | |
OpenAI | gpt-4o | ✅ | |
OpenAI | gpt-4o-mini | ✅ | |
OpenAI | o1-2024-12-17 | ✅ | |
OpenAI | o1-mini-2024-09-12 | ❌ | Doesn't support function calling |
OpenAI | o3-mini-2025-01-31 | ✅ | |
xAI | All | ❌ | Not supported yet |