By default, a ThreadPoolExecutor will wait at Python interpreter shutdown for
all of its threads to stop on their own before letting the interpreter exit.
We don't want that for the network request thread pool; it adds shutdown
latency whenever there are outstanding requests. Killing the threads in our
pool is perfectly safe, so we avoid the latency by introducing an
UnsafeThreadPoolExecutor.
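As a rough illustration of the idea (not the actual implementation, which is
more involved), an executor along these lines keeps its workers daemonic and
never registers an interpreter-exit hook, so shutdown never waits on them:

    import queue
    import threading
    from concurrent.futures import Future

    class UnsafeThreadPoolExecutor:
        def __init__(self, max_workers=4):
            self._work_queue = queue.Queue()
            for _ in range(max_workers):
                # Daemon threads are dropped abruptly at interpreter exit
                # instead of being joined, which is fine for fire-and-forget
                # network requests.
                threading.Thread(target=self._Worker, daemon=True).start()

        def _Worker(self):
            while True:
                fn, args, kwargs, future = self._work_queue.get()
                if not future.set_running_or_notify_cancel():
                    continue
                try:
                    future.set_result(fn(*args, **kwargs))
                except BaseException as error:
                    future.set_exception(error)

        def submit(self, fn, *args, **kwargs):
            future = Future()
            self._work_queue.put((fn, args, kwargs, future))
            return future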
It appears that the issue comes from sending a None timeout to Requests; it
seems to be a bug in Requests/urllib3. So we just pick an arbitrarily long
timeout of 30s as the default.
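A minimal sketch of the workaround (the handler URL, port and function name
below are assumptions); the only point is that a concrete timeout is always
passed instead of None:

    import requests

    _DEFAULT_TIMEOUT_SEC = 30

    def PostDataToHandler(data, handler, timeout=_DEFAULT_TIMEOUT_SEC):
        # An explicit timeout sidesteps the None-timeout behaviour in
        # Requests/urllib3; 30s is just an arbitrary "long enough" value.
        response = requests.post('http://127.0.0.1:8080/' + handler,
                                 json=data,
                                 timeout=timeout)
        response.raise_for_status()
        return response.json()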
Vim still loves to block the main GUI thread on occasion when asking for
completions. To counteract this stupidity, we enforce a hard budget of 0.5s
for all completion requests. If the server doesn't respond by then (it should,
unless something really bad happened), we give up.
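A sketch of the budget enforcement, assuming the request is already running on
a background future (the names below are illustrative):

    from concurrent.futures import TimeoutError as FutureTimeoutError

    COMPLETION_TIMEOUT_SEC = 0.5

    def ResponseOrNone(completion_future):
        try:
            # Block the GUI thread for at most half a second; a wedged server
            # should never be able to hold Vim hostage for longer than that.
            return completion_future.result(timeout=COMPLETION_TIMEOUT_SEC)
        except FutureTimeoutError:
            return None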
Now every FileReadyToParse event returns diagnostics, if there are any. This
replaces the previous system, where diagnostics were fetched in a separate
request (which caused race conditions).
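A sketch of the single-request flow, with hypothetical helper names: the
FileReadyToParse response itself carries the diagnostics, so there is no
second fetch that could race against a newer parse:

    def HandleFileReadyToParseResponse(parse_future, update_ui):
        # update_ui is whatever callback pushes diagnostics into Vim's
        # signs/location list.
        diagnostics = parse_future.result()  # empty when nothing to report
        if diagnostics:
            update_ui(diagnostics)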
There appear to be timing issues for the diag requests. Somehow, we're sending
out-of-date diagnostics and then not updating the UI when things change.
That needs to be fixed.
The problem was that when you start Vim with something like "vim foo.cc", the
FileReadyToParse event is sent to the server before the server has actually
started up. Basically, a race condition.
We _really_ don't want to miss that event. For C++ files, it tells the server to
start compiling the file.
So now PostDataToHandlerAsync in BaseRequest will retry the request 3 times
(with exponential backoff) before failing, thus giving the server time to boot.
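A synchronous sketch of the same retry-with-backoff idea (the real code does
this asynchronously, and the URL, port and retry constants here are
assumptions):

    import time
    import requests

    def PostDataWithRetries(data, handler, retries=3, initial_delay=0.5):
        delay = initial_delay
        for attempt in range(retries):
            try:
                return requests.post('http://127.0.0.1:8080/' + handler,
                                     json=data,
                                     timeout=30)
            except requests.exceptions.ConnectionError:
                if attempt == retries - 1:
                    raise
                # The server probably hasn't finished booting yet; back off
                # exponentially and try again.
                time.sleep(delay)
                delay *= 2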
These exceptions happen rarely and are not a big deal when they do. We still
log them to the Vim message area, but we don't annoy the user with the
default, in-your-face Python traceback.
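A sketch of the gentler reporting, assuming this runs inside Vim's embedded
Python (where the vim module is available):

    import vim

    def EchoError(message):
        # Doubling single quotes escapes them inside a Vim single-quoted
        # string; 'echom' keeps the text visible in :messages without a
        # traceback.
        vim.command("echom '{0}'".format(message.replace("'", "''")))

    def CallAndReportErrors(request_callable):
        try:
            return request_callable()
        except Exception as error:
            EchoError(str(error))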