gptel 
- Description: Interact with ChatGPT or other LLMs
- Latest: gptel-0.9.9.tar (.sig), 2025-Sep-03, 720 KiB
- Maintainer: Karthik Chikmagalur <karthik.chikmagalur@gmail.com>
- Website: https://github.com/karthink/gptel
- Browse ELPA's repository: CGit or Gitweb
To install this package from Emacs, use `package-install' or `list-packages'.
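For instance, from a running Emacs session, a minimal sketch using the built-in package.el commands:

#+begin_src emacs-lisp
;; Refresh the package list, then install gptel from GNU ELPA.
(package-refresh-contents)
(package-install 'gptel)
#+end_src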
Full description
gptel is a simple Large Language Model chat client, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and in any buffer.

gptel supports:

- The services ChatGPT, Azure, Gemini, Anthropic AI, Together.ai, Perplexity, AI/ML API, Anyscale, OpenRouter, Groq, PrivateGPT, DeepSeek, Cerebras, Github Models, GitHub Copilot chat, AWS Bedrock, Novita AI, xAI, Sambanova, Mistral Le Chat and Kagi (FastGPT & Summarizer).
- Local models via Ollama, Llama.cpp, Llamafiles or GPT4All.

Additionally, any LLM service (local or remote) that provides an OpenAI-compatible API is supported.

Features:

- Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer, wherever).
- LLM responses are in Markdown or Org markup.
- Supports conversations and multiple independent sessions.
- Supports tool-use to equip LLMs with agentic capabilities.
- Supports Model Context Protocol (MCP) integration using the mcp.el package.
- Supports multi-modal models (send images, documents).
- Supports "reasoning" content in LLM responses.
- Save chats as regular Markdown/Org/Text files and resume them later.
- You can go back and edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model.
- Redirect prompts and responses easily.
- Rewrite, refactor or fill in regions in buffers.
- Write your own commands for custom tasks with a simple API.

Requirements for ChatGPT, Azure, Gemini or Kagi:

- You need an appropriate API key. Set the variable `gptel-api-key' to the key or to a function of no arguments that returns the key. (It tries to use `auth-source' by default.)

ChatGPT is configured out of the box. For the other sources:

- For Azure: define a gptel-backend with `gptel-make-azure', which see.
- For Gemini: define a gptel-backend with `gptel-make-gemini', which see.
- For Anthropic (Claude): define a gptel-backend with `gptel-make-anthropic', which see.
- For AI/ML API, Together.ai, Anyscale, Groq, OpenRouter, DeepSeek, Cerebras or Github Models: define a gptel-backend with `gptel-make-openai', which see.
- For PrivateGPT: define a backend with `gptel-make-privategpt', which see.
- For Perplexity: define a backend with `gptel-make-perplexity', which see.
- For DeepSeek: define a backend with `gptel-make-deepseek', which see.
- For Kagi: define a gptel-backend with `gptel-make-kagi', which see.

For local models using Ollama, Llama.cpp or GPT4All:

- The model has to be running on an accessible address (or localhost).
- Define a gptel-backend with `gptel-make-ollama' or `gptel-make-gpt4all', which see.
- Llama.cpp or Llamafiles: define a gptel-backend with `gptel-make-openai'.

Consult the package README for examples and more help with configuring backends.
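For example, here is a minimal sketch of a local-model setup in the style of the README's examples; it assumes an Ollama server running on its default port with a mistral:latest model already pulled:

#+begin_src emacs-lisp
;; Register a local Ollama backend and make it the default.
;; The host, port and model name are assumptions; adjust them
;; to match your local setup.
(setq gptel-backend (gptel-make-ollama "Ollama"
                      :host "localhost:11434"
                      :stream t
                      :models '(mistral:latest))
      gptel-model 'mistral:latest)
#+end_src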
Usage:

gptel can be used in any buffer or in a dedicated chat buffer. The interaction model is simple: type in a query and the response will be inserted below. You can continue the conversation by typing below the response.

To use this in any buffer:

- Call `gptel-send' to send the buffer's text up to the cursor. Select a region to send only the region.
- You can select previous prompts and responses to continue the conversation.
- Call `gptel-send' with a prefix argument to access a menu where you can set your backend, model and other parameters, or to redirect the prompt/response.

To use this in a dedicated buffer:

- M-x gptel: Start a chat session.
- In the chat session: Press `C-c RET' (`gptel-send') to send your prompt. Use a prefix argument (`C-u C-c RET') to access a menu. In this menu you can set chat parameters like the system directives, active backend or model, or choose to redirect the input or output elsewhere (such as to the kill ring or the echo area).
- You can save this buffer to a file. When opening this file, turn on `gptel-mode' before editing it to restore the conversation state and continue chatting.
- To include media files with your request, you can add them to the context (described next), or include them as links in Org or Markdown mode chat buffers. Sending media is disabled by default; you can turn it on globally via `gptel-track-media', or locally in a chat buffer via the header line.

Include more context with requests:

If you want to provide the LLM with more context, you can add arbitrary regions, buffers, files or directories to the query with `gptel-add'. To add text or media files, call `gptel-add' in Dired, or use the dedicated `gptel-add-file'. You can also add context from gptel's menu instead (`gptel-send' with a prefix arg), as well as examine or modify context. When context is available, gptel will include it with each LLM query.

LLM Tool use:

gptel supports "tool calling" behavior, where LLMs can specify arguments with which to call provided "tools" (elisp functions). The results of running the tools are fed back to the LLM, giving it capabilities and knowledge beyond what is available out of the box. For example, tools can perform web searches or API lookups, modify files and directories, and so on. Tools can be specified via `gptel-make-tool' (see the sketch at the end of this description), or obtained from other repositories, or from Model Context Protocol (MCP) servers using the mcp.el package. See the README for details. Tools can be included with LLM queries using gptel's menu, or from `gptel-tools'.

Rewrite interface:

In any buffer: with a region selected, you can rewrite prose, refactor code or fill in the region. This is accessible via `gptel-rewrite', and also from the `gptel-send' menu.

Presets:

Define a bundle of configuration (model, backend, system message, tools etc.) as a "preset" that can be applied together, making it easy to switch between tasks in gptel. Presets can be saved and applied from gptel's transient menu. You can also include a cookie of the form "@preset-name" in the prompt to send a request with a preset applied. This feature works everywhere, but preset cookies are also fontified in chat buffers.

gptel in Org mode:

gptel offers a few extra conveniences in Org mode:

- You can limit the conversation context to an Org heading with `gptel-org-set-topic'.
- You can have branching conversations in Org mode, where each hierarchical outline path through the document is a separate conversation branch. See the variable `gptel-org-branching-context'.
- You can declare the gptel model, backend, temperature, system message and other parameters as Org properties with the command `gptel-org-set-properties'. gptel queries under the corresponding heading will always use these settings, allowing you to create mostly reproducible LLM chat notebooks.

Finally, gptel offers a general purpose API for writing LLM interactions that suit your workflow. See `gptel-request', and `gptel-fsm' for more advanced usage.
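As an illustration of the tool interface mentioned above, here is a minimal sketch along the lines of the README's examples; the tool name, body and argument spec here are illustrative, not a fixed part of the API:

#+begin_src emacs-lisp
;; A minimal tool sketch: lets the LLM read the contents of an
;; Emacs buffer.  The :name, :description and argument spec are
;; illustrative.
(gptel-make-tool
 :name "read_buffer"
 :function (lambda (buffer)
             (unless (buffer-live-p (get-buffer buffer))
               (error "Buffer %s is not live" buffer))
             (with-current-buffer buffer
               (buffer-substring-no-properties (point-min) (point-max))))
 :description "Return the contents of an Emacs buffer"
 :args (list '(:name "buffer"
               :type string
               :description "The name of the buffer to read"))
 :category "emacs")
#+end_src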
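And a sketch of a custom command built on `gptel-request'; the command name is hypothetical, and the callback follows the documented contract (a response string on success, otherwise consult the INFO plist):

#+begin_src emacs-lisp
;; A hypothetical custom command built on `gptel-request'.  It sends
;; a prompt read from the minibuffer and echoes the LLM's response.
(defun my/gptel-quick-ask (prompt)
  "Send PROMPT to the current gptel backend and echo the response."
  (interactive "sAsk: ")
  (gptel-request prompt
    :callback (lambda (response info)
                (if (stringp response)
                    (message "%s" response)
                  (message "gptel request failed: %s"
                           (plist-get info :status))))))
#+end_src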
Old versions
gptel-0.9.8.5.tar.lz | 2025-Jun-11 | 126 KiB
gptel-0.9.8.tar.lz | 2025-Mar-15 | 102 KiB
gptel-0.9.7.tar.lz | 2024-Dec-05 | 76.5 KiB
gptel-0.9.6.tar.lz | 2024-Oct-17 | 69.2 KiB
gptel-0.9.5.tar.lz | 2024-Oct-12 | 66.8 KiB
gptel-0.9.0.tar.lz | 2024-Jun-24 | 57.8 KiB
gptel-0.8.6.tar.lz | 2024-May-02 | 49.3 KiB
gptel-0.8.5.tar.lz | 2024-May-01 | 49.5 KiB
News
# -*- mode: org; -*-

* 0.9.9 2025-08-02

** Breaking changes

- The suffix =-latest= has been dropped from Grok model names, as it is no longer required. The models =grok-3-latest=, =grok-3-mini-latest= have been renamed to just =grok-3=, =grok-3-mini= and so on.
- The models =gemini-exp-1206=, =gemini-2.5-pro-preview-03-25=, =gemini-2.5-pro-preview-05-06= and =gemini-2.5-flash-preview-04-17= have been removed from the default list of Gemini models. The first one is no longer available, and the others are superseded by their stable, non-preview versions. If required, you can add these models back to the Gemini backend in your personal configuration:
  #+begin_src emacs-lisp
  (push 'gemini-2.5-pro-preview-03-25
        (gptel-backend-models (gptel-get-backend "Gemini")))
  #+end_src

** New models and backends

- Add support for ~grok-code-fast-1~.
- Add support for ~gpt-5~, ~gpt-5-mini~ and ~gpt-5-nano~.
- Add support for ~claude-opus-4-1-20250805~.
- Add support for ~gemini-2.5-pro~, ~gemini-2.5-flash~ and ~gemini-2.5-flash-lite-preview-06-17~.
- Add support for Open WebUI. Open WebUI provides an OpenAI-compatible API, so the "support" is just a new section of the README with instructions.
- Add support for Moonshot (Kimi), in a similar sense.
- Add support for the AI/ML API, in a similar sense.
- Add support for ~grok-4~.

** New features and UI changes

- ~gptel-rewrite~ no longer pops up a Transient menu. Instead, it reads a rewrite instruction and starts the rewrite immediately. This is intended to reduce the friction of using ~gptel-rewrite~. You can still bring up the Transient menu by pressing =M-RET= instead of =RET= when supplying the rewrite instruction. If no region is selected and there are pending rewrites, the rewrite menu is displayed.
- ~gptel-rewrite~ will now produce more refined merge conflicts when using the merge action. It works by feeding the original and rewritten text to git (when it is available).
- New command ~gptel-gh-login~ to authenticate with GitHub Copilot. The authentication step happens automatically when you use gptel, so invoking it manually is not required. But you can use this command to change accounts or refresh your login if required.
- gptel now supports handling reasoning/thinking blocks in responses from xAI's Grok models. This is controlled by ~gptel-include-reasoning~, in the same way that it handles other APIs.
- When including a file in the context, the abbreviated full path of the file is now included instead of the basename. Specifically, =/home/user/path/to/file= is included as =~/path/to/file=. This is to provide additional context for LLM actions, including tool-use in subsequent conversation turns. This applies to context included via ~gptel-add~ or as a link in a buffer.
- Structured output support: ~gptel-request~ can now take an optional schema argument to constrain LLM output to the specified JSON schema. The JSON schema can be provided as
  - an elisp object, a nested plist structure,
  - a JSON schema serialized to a string, or
  - a shorthand object/array description, described in the manual (and the documentation of ~gptel--dispatch-schema-type~).
  This feature works with all major backends: OpenAI, Anthropic, Gemini, llama-cpp and Ollama. It is presently supported by some but not all "OpenAI-compatible API" providers. Note that this is only available via the ~gptel-request~ API, and currently unsupported by ~gptel-send~. (See the sketch after this list.)
- gptel's log buffer and logging settings are now accessible from gptel's Transient menu. To see these, turn on the full interface by setting ~gptel-expert-commands~.
- Presets: You can now specify ~:request-params~ (API-specific request parameters) in a preset.
- From the dry-run inspector buffer, you can now copy the Curl command for the request. As when continuing the query, the request is constructed from the contents of the buffer, which is editable.
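A sketch of the structured output feature described above, assuming the schema is passed via a ~:schema~ keyword to ~gptel-request~; the prompt and schema are illustrative, and the schema uses the serialized JSON-string form listed above:

#+begin_src emacs-lisp
;; A sketch of structured output via `gptel-request'.  The :schema
;; keyword name is an assumption based on the NEWS entry above; the
;; schema is given in its serialized JSON-string form.
(gptel-request "List three Emacs major modes."
  :schema "{\"type\": \"object\", \"properties\": {\"modes\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}}}, \"required\": [\"modes\"]}"
  :callback (lambda (response info)
              (when (stringp response)
                (message "Structured response: %s" response))))
#+end_src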
...