How To Fix Rust-Analyzer High CPU Usage In Neovim LSP
UPDATE 2024-02-02: It seems the issue was fixed last year, so the workaround from this article is no longer needed. However, I recently submitted a PR to fix the same issue when Git repos are used as dependencies: !2995.
I recently had to set up my development environment for a Rust project. I've been switching between Vim and Neovim for about 8 years now, and my dotfiles configuration has been more or less compatible with Neovim and Vim up until very recently.
I broke that compatibility after I installed my first Lua plugin, nvim-tree.lua, because NERDTree had some unresolved bugs. Even though writing Lua is a bit easier than Vimscript, I'm still a little bummed out that the ecosystem has split in two and will likely never merge back together, since Vim's author and maintainer decided not to add Lua support. I can understand him, since Vimscript already offers similar features, although it is trickier to write.
My previous configuration involved the Asynchronous Lint Engine, or ALE. I had even submitted a bunch of PRs to ALE, so it took me a while to migrate to Neovim's built-in LSP support.
I decided to switch because I wanted to use autocomplete with Rust Analyzer, but the auto-import functionality didn't work unless the LSP client supported the completionItem/resolve functionality:
> LSP and performance implications
>
> The feature is enabled only if the LSP client supports LSP protocol version 3.16+ and reports the additionalTextEdits (case-sensitive) resolve client capability in its client capabilities. This way the server is able to defer the costly computations, doing them for a selected completion item only. For clients with no such support, all edits have to be calculated on the completion request, including the fuzzy search completion ones, which might be slow, ergo the feature is automatically disabled.
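In other words, the import edit is only computed once a specific completion item is selected and the client asks the server to resolve it. Here is a rough sketch of what that round trip looks like, written as Lua tables (the same notation Neovim's LSP log uses); all values are illustrative:

```lua
-- Rough sketch of a completionItem/resolve round trip; all values are made up.

-- Request: the client sends back the completion item it previously received
-- from textDocument/completion, asking the server to fill in the missing bits.
local resolve_request = {
  jsonrpc = "2.0",
  id = 42,
  method = "completionItem/resolve",
  params = { label = "HashMap" },
}

-- Response: the server now includes the costly-to-compute import edit.
local resolve_response = {
  jsonrpc = "2.0",
  id = 42,
  result = {
    label = "HashMap",
    additionalTextEdits = {
      {
        range = { start = { line = 0, character = 0 }, ["end"] = { line = 0, character = 0 } },
        newText = "use std::collections::HashMap;\n",
      },
    },
  },
}
```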
Unfortunately, Neovim's LSP client didn't support it out of the box either. Actually, it was even worse than ALE: completions from servers that already provided text edits in the completion list didn't get them applied. On the other hand, I really liked the way the popover worked and how it was rendered, and the configuration seemed much simpler and easier to extend than ALE's.
I decided I'd try to implement a few hacks to add support for auto imports after completions. First I had to add the missing client capabilities:
```lua
local completion_item_resolve_capabilities = vim.lsp.protocol.make_client_capabilities()

completion_item_resolve_capabilities.textDocument.completion.completionItem = {
  resolveSupport = {
    properties = {"additionalTextEdits"}
  }
}
```
and then configure the LSP client:
```lua
lspconfig.rust_analyzer.setup {
  capabilities = completion_item_resolve_capabilities,
}
```
I then found a Reddit thread where someone had already done most of the work for me, but I still had to modify it since it didn't work out of the box:
```lua
local function register_completion_item_resolve_callback(buf, client)
  if vim.b[buf].lsp_resolve_callback_registered then
    return
  end

  vim.b[buf].lsp_resolve_callback_registered = true

  local resolve_provider = client.server_capabilities and
    client.server_capabilities.completionProvider and
    client.server_capabilities.completionProvider.resolveProvider

  local offset_encoding = client.offset_encoding

  vim.api.nvim_create_autocmd({"CompleteDone"}, {
    group = vim.api.nvim_create_augroup(au_group, {clear = false}),
    buffer = buf,
    callback = function(_)
      local completed_item = vim.v.completed_item
      if not (completed_item and completed_item.user_data and
          completed_item.user_data.nvim and completed_item.user_data.nvim.lsp and
          completed_item.user_data.nvim.lsp.completion_item) then
        return
      end

      local item = completed_item.user_data.nvim.lsp.completion_item
      local bufnr = vim.api.nvim_get_current_buf()

      -- Check if the item already has text edits attached.
      -- https://github.com/neovim/neovim/issues/12310#issuecomment-628269290
      if item.additionalTextEdits and #item.additionalTextEdits > 0 then
        vim.lsp.util.apply_text_edits(item.additionalTextEdits, bufnr, offset_encoding)
        return
      end

      -- Check if the server supports resolving completions.
      if not resolve_provider then
        return
      end

      vim.lsp.buf_request(bufnr, "completionItem/resolve", item, function(err, result, _)
        if err ~= nil then
          return
        end

        if not result then
          return
        end

        if not result.additionalTextEdits then
          return
        end

        if #result.additionalTextEdits == 0 then
          return
        end

        vim.lsp.util.apply_text_edits(result.additionalTextEdits, bufnr, offset_encoding)
      end)
    end,
  })
end
```
I had to call this new function from the LspAttach autocmd callback:
```lua
vim.api.nvim_create_autocmd('LspAttach', {
  group = vim.api.nvim_create_augroup(au_group, {}),

  callback = function(ev)
    local client = vim.lsp.get_client_by_id(ev.data.client_id)

    if not client then
      print("LspAttach event: no LSP client", ev.data.client_id)
      return
    end

    register_completion_item_resolve_callback(ev.buf, client)
  end,
})
```
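Both autocmd snippets reference an au_group variable that I define once near the top of my LSP config; the actual name doesn't matter, it just has to be consistent:

```lua
-- Augroup name shared by the LSP autocmds above (the name itself is arbitrary).
local au_group = "user_lsp_autocmds"
```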
And that was it: the auto-import functionality now simply worked, both for language servers that provided the necessary text edits immediately, like gopls, and for the ones that required us to call completionItem/resolve.
Little did I know that I'd soon run into performance issues, albeit totally unrelated to this...
I opened a project that had about 700 different dependencies, internal and external. Every time I opened this project, it would take between one and two minutes for Rust Analyzer to initialize, because Rust Analyzer doesn't persist any state and has to index the whole project all over again on startup. This would have been fine had I not noticed high CPU usage once I started navigating across different crates using vim.lsp.buf.definition().
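For reference, go-to-definition in my setup is nothing fancy, just the built-in LSP function bound to a key; a minimal mapping would look something like this (the gd binding is my own choice):

```lua
-- Jump to the definition of the symbol under the cursor using the attached LSP server.
vim.keymap.set("n", "gd", vim.lsp.buf.definition, { desc = "LSP: go to definition" })
```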
This high CPU usage seemed really weird: I'd be able to go to definition across multiple crates, and then it would randomly get stuck, and I was no longer able to go to definition until the CPU usage had settled down. I ended up enabling extra logging, both in Neovim and in Rust Analyzer, to try to figure out what was going on.
A colleague of mine had shared their configuration, which I later found had most likely been extracted from here. It involved adding a root_dir function which would return nil whenever we tried to open a file that was not within Neovim's CWD. My simplified port of it looked like this:
```lua
local function is_in_workspace(filename)
  local workspace_dir = vim.fn.getcwd()

  return vim.startswith(filename, workspace_dir)
end

lspconfig.rust_analyzer.setup {
  root_dir = function(filename)
    if not is_in_workspace(filename) then
      return nil
    end

    return lspconfig.util.root_pattern("Cargo.lock")(filename)
  end,
  capabilities = completion_item_resolve_capabilities,
}
```
While this did solve the high CPU usage, I quickly realized that navigation across external crates stopped working as soon as I opened a definition outside of my workspace. I even installed VSCode to see if I could reproduce the same problem there, but everything just worked, so I decided to keep investigating.
I first enabled LSP logging in Neovim with this Lua line in my config:
```lua
vim.lsp.set_log_level('trace')
```
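The trace output goes to Neovim's LSP log file; if you don't know where that lives, Neovim can tell you (nvim-lspconfig also ships an :LspLog command that opens it directly):

```lua
-- Print the path of Neovim's LSP log file.
print(vim.lsp.get_log_path())
```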
To enable debug logs in Rust Analyzer I started Neovim using:
```sh
env RA_LOG=lsp_server=debug nvim src/main.rs
```
I noticed a lot of Roots Scanned progress messages in the logs, followed by Indexing. The number of indexed packages would be different every time. To make it even more confusing, this happened for files in packages that had clearly already been indexed the first time I opened the project. And I was still confused by the fact that it would get stuck randomly: every time, it seemed to be caused by a different go-to-definition call.
After a while I noticed a line in the logs that looked like this:
```
{ jsonrpc = "2.0", method = "workspace/didChangeWorkspaceFolders", ... }
```
And the filename in the message was always from an external crate. After this line in the logs, I'd see a ton of the aforementioned Roots Scanned progress messages, followed by Indexing. And then it hit me: it was the indexing that was eating up the CPU and blocking the go-to-definition requests. Scanning the roots would always take a while, but it was not blocking, and that is why it always seemed like Rust Analyzer would get stuck at random. The "stuckage" wasn't immediate, it happened with a delay, so it was hard to trace it back to its cause.
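Expanded, that notification follows the workspace/didChangeWorkspaceFolders shape from the LSP spec: the external crate's directory shows up under event.added, and that is what kicks off the new round of root scanning and indexing. Roughly, written as a Lua table like the log line above (the crate path is made up):

```lua
-- Illustrative expansion of the notification as it might appear in the log;
-- the crate path is made up.
local did_change_workspace_folders = {
  jsonrpc = "2.0",
  method = "workspace/didChangeWorkspaceFolders",
  params = {
    event = {
      added = {
        {
          uri = "file:///home/user/.cargo/registry/src/some-external-crate-1.2.3",
          name = "some-external-crate-1.2.3",
        },
      },
      removed = {},
    },
  },
}
```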
An easy solution would have been to set root_dir to my CWD or some hardcoded path, but I also wanted to be able to have separate workspaces for different projects in the same Neovim instance.
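For comparison, the quick-and-dirty version I decided against would have looked roughly like this:

```lua
-- Pin every rust-analyzer instance to the directory Neovim was started in.
lspconfig.rust_analyzer.setup {
  root_dir = function()
    return vim.fn.getcwd()
  end,
  capabilities = completion_item_resolve_capabilities,
}
```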
I eventually came up with this modification of the root_dir function:
```lua
local function most_recent_root_dir(cur_bufnr)
  local filetype = vim.bo[cur_bufnr].filetype
  local buffers = {}

  for _, bufnr in ipairs(vim.api.nvim_list_bufs()) do
    if not (bufnr == cur_bufnr) and
      vim.api.nvim_buf_is_loaded(bufnr) and
      vim.bo[bufnr].filetype == filetype
    then
      local root_dir = vim.b[bufnr].lsp_root_dir
      if root_dir then
        table.insert(buffers, {
          root_dir = root_dir,
          lastused = vim.fn.getbufinfo(bufnr)[1].lastused,
        })
      end
    end
  end

  table.sort(buffers, function(a, b)
    return a.lastused > b.lastused
  end)

  local item = buffers[1]

  return item and item.root_dir
end

local function is_in_workspace(path)
  local workspace_dir = vim.fn.getcwd()
  return vim.startswith(path, workspace_dir)
end

local root_dir = function(filename, bufnr)
  if not is_in_workspace(filename) then
    return most_recent_root_dir(bufnr)
  end

  local root_dir = lspconfig.util.root_pattern("Cargo.lock")(filename)

  vim.b[bufnr].lsp_root_dir = root_dir

  return root_dir
end

lspconfig.rust_analyzer.setup {
  root_dir = root_dir,
  capabilities = completion_item_resolve_capabilities,
}
```
The most_recent_root_dir function is used as a workaround when jumping to definitions. If we jump to an external crate, the default implementation would add the whole library to the workspace as a new project, but that would result in Rust Analyzer doing duplicate work and analyzing the project all over again. Instead, we just reuse the root_dir from the most recently used buffer with the same filetype for which we had already figured out the root_dir. It has one flaw: if we switch to a buffer that belongs to a different workspace after sending the go-to-definition request, but before the LSP server has had a chance to respond, we will end up with the wrong root_dir. In practice the response is quick enough, and I can live with that until a better fix is submitted to nvim-lspconfig.
The root_dir parameter is actually not part of the LSP spec; it's something some clients have come up with in order to differentiate files belonging to different workspaces / LSP contexts. In nvim-lspconfig it allows us to reuse the existing context for files in external crates, since those files have already been analyzed.
After I figured out what the exact issue was, it became much easier to search the web for solutions. I found that this issue is already tracked in nvim-lspconfig under Issue 2518.
My whole Neovim LSP configuration can be found here.
I hope this post will help someone else in case they run into the same issue.
Cheers! 🍻
P.S. I later found this neat plugin for displaying LSP progress logs. It would have probably taken less time to debug all of this if I had this plugin installed when I first ran into the high CPU issue.