Previously, a string like 'foo\'bar\'zoo' would make the collection process
think that "bar" is not inside a string because it wouldn't recognize that the
quotes are escaped. Now it does.
Fixes #143.
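For illustration, the escape handling boils down to something like this (a
minimal sketch; the function name and the exact stripping behavior are made up
here, not the actual collection code):

    #include <string>

    // Sketch: strip the contents of single-quoted strings from |text|,
    // treating a backslash-escaped quote (\') as part of the string rather
    // than as its terminator. So in 'foo\'bar\'zoo', "bar" stays inside
    // the string and is not collected as an identifier.
    std::string RemoveStringContents( const std::string &text ) {
      std::string result;
      bool in_string = false;
      for ( size_t i = 0; i < text.size(); ++i ) {
        char current = text[ i ];
        if ( in_string && current == '\\' ) {
          ++i;  // Skip the escaped character; \' does not end the string.
          continue;
        }
        if ( current == '\'' ) {
          in_string = !in_string;
          result += current;
          continue;
        }
        if ( !in_string )
          result += current;
      }
      return result;
    }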
This change should fix the random hangs and segfaults when using the clang
completer. Assertion errors printed to the console on Vim exit should also go
away, as should segfaults on exit. These "on exit" errors were caused by not
cleanly shutting down the background threads; both the identifier completer and
the clang completer now join their threads on destruction. This results in a
clean shutdown.
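The shutdown pattern is roughly the following (an illustrative sketch using
std::thread; the class and member names are not the real ones):

    #include <atomic>
    #include <chrono>
    #include <thread>

    // Sketch: the worker is told to stop and then joined in the destructor,
    // so Vim never exits while the thread is still touching completer state.
    class BackgroundWorker {
    public:
      BackgroundWorker()
        : should_stop_( false ),
          thread_( [ this ] { Run(); } ) {}

      ~BackgroundWorker() {
        should_stop_ = true;
        if ( thread_.joinable() )
          thread_.join();  // Wait for the thread; this is the clean shutdown.
      }

    private:
      void Run() {
        while ( !should_stop_ ) {
          // ... pull work items and process them ...
          std::this_thread::sleep_for( std::chrono::milliseconds( 10 ) );
        }
      }

      std::atomic< bool > should_stop_;
      std::thread thread_;
    };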
The new clang completer architecture now uses only one clang thread (again)
instead of separate completion and parsing threads. Since the parsing task
needs to wait on the completion task if it was started first (and vice versa),
there's no point in using two threads. The desired "simplicity" of using two
threads for these two tasks actually created needless complexity (and bugs).
Sigh. Such is life.
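Concretely, the single-thread scheme looks roughly like this (class and
function names are illustrative, not the actual completer code): one thread
drains a queue, and parse and completion requests are just tasks on that queue.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    // Sketch: parsing and code completion are serialized through one queue,
    // so there is no cross-thread waiting between a "parse thread" and a
    // "completion thread".
    class ClangTaskRunner {
    public:
      ClangTaskRunner() : done_( false ), thread_( [ this ] { Loop(); } ) {}

      ~ClangTaskRunner() {
        {
          std::lock_guard< std::mutex > lock( mutex_ );
          done_ = true;
        }
        ready_.notify_one();
        thread_.join();
      }

      void Post( std::function< void() > task ) {
        {
          std::lock_guard< std::mutex > lock( mutex_ );
          tasks_.push( std::move( task ) );
        }
        ready_.notify_one();
      }

    private:
      void Loop() {
        for ( ;; ) {
          std::function< void() > task;
          {
            std::unique_lock< std::mutex > lock( mutex_ );
            ready_.wait( lock, [ this ] { return done_ || !tasks_.empty(); } );
            if ( done_ && tasks_.empty() )
              return;
            task = std::move( tasks_.front() );
            tasks_.pop();
          }
          task();  // Either a parse task or a completion task.
        }
      }

      std::mutex mutex_;
      std::condition_variable ready_;
      std::queue< std::function< void() > > tasks_;
      bool done_;
      std::thread thread_;
    };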
A TranslationUnit abstraction was also created, which further reduces the
complexity of the clang completer.
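A minimal sketch of the shape of such a wrapper on top of libclang's C API
(the method names here are illustrative; the real class does more, e.g.
unsaved-file handling and locking):

    #include <string>
    #include <vector>
    #include <clang-c/Index.h>

    // Sketch: one object owns one CXTranslationUnit and exposes the two
    // operations the completer needs, reparsing and code completion.
    class TranslationUnit {
    public:
      TranslationUnit( CXIndex index,
                       const std::string &filename,
                       const std::vector< const char * > &flags )
        : filename_( filename ),
          clang_tu_( clang_parseTranslationUnit(
              index,
              filename.c_str(),
              flags.data(),
              static_cast< int >( flags.size() ),
              NULL,
              0,
              clang_defaultEditingTranslationUnitOptions() ) ) {}

      ~TranslationUnit() {
        if ( clang_tu_ )
          clang_disposeTranslationUnit( clang_tu_ );
      }

      void Reparse() {
        if ( clang_tu_ )
          clang_reparseTranslationUnit(
              clang_tu_, 0, NULL, clang_defaultReparseOptions( clang_tu_ ) );
      }

      // The caller owns the results and must free them with
      // clang_disposeCodeCompleteResults.
      CXCodeCompleteResults *CandidatesForLocation( unsigned line,
                                                    unsigned column ) {
        if ( !clang_tu_ )
          return NULL;
        return clang_codeCompleteAt( clang_tu_, filename_.c_str(), line,
                                     column, NULL, 0,
                                     clang_defaultCodeCompleteOptions() );
      }

    private:
      std::string filename_;
      CXTranslationUnit clang_tu_;
    };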
The clang completer now also has some (very) basic tests.
We limit the number of candidates returned to Vim to 20 and also make sure that
we are not returning any duplicate candidates. This provides a noticeable
improvement in latency.
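Roughly, the trimming amounts to this (a sketch; the function name is made up,
the limit of 20 is the real one):

    #include <string>
    #include <unordered_set>
    #include <vector>

    // Sketch: keep at most |max_candidates| candidates and drop duplicates
    // while preserving the existing ranking order.
    std::vector< std::string > TrimCandidates(
        const std::vector< std::string > &ranked_candidates,
        size_t max_candidates = 20 ) {
      std::vector< std::string > result;
      std::unordered_set< std::string > seen;
      for ( const std::string &candidate : ranked_candidates ) {
        if ( result.size() >= max_candidates )
          break;
        if ( seen.insert( candidate ).second )
          result.push_back( candidate );
      }
      return result;
    }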
This removes the need for a special overload of AddCandidatesToDatabase. Also,
GetFuture now has a more sensible API: the list is returned instead of being
accepted as an out parameter.
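In other words, the API moved from an out-parameter shape to a value-returning
one; a toy sketch (the signatures and the std::future use here are assumptions,
not the real code):

    #include <future>
    #include <string>
    #include <vector>

    typedef std::vector< std::string > Candidates;

    // Old shape (awkward): the result is accepted as an out parameter.
    //   void GetFuture( const std::string &query,
    //                   std::future< Candidates > &future_out );

    // New shape: the future holding the candidate list is simply returned.
    std::future< Candidates > GetFuture( const std::string &query ) {
      return std::async( std::launch::async, [ query ] {
        // ... compute candidates for |query| on a worker thread ...
        return Candidates( 1, query + "_dummy_candidate" );
      } );
    }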
LLVM also ships a copy of gtest in its source tree. This causes CMake to bork
because it sees multiple targets with the same names (gtest and gtest_main). So
we have to rename our versions of gtest and gtest_main to something else...
We're just appending _ycm now.
This will cause pain when we want to update gtest in the future from upstream,
but I don't see a better way of handling this.
The problem was that we should have been using a longest common subsequence
algorithm for the "number of word boundary character matches" calculation. Our
old approach would fail for the following case:
Query: "caafoo"
Candidate1: "acaaCaaFooGxx"
Candidate2: "aCaafoog"
Candidate1 needs to win. This is now also a test case.
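The subsequence length itself is the classic dynamic programming computation; a
sketch follows (assuming word boundary characters are the candidate's first
character plus characters after an underscore or at a lower-to-upper case
change; the helper names are illustrative):

    #include <algorithm>
    #include <cctype>
    #include <string>
    #include <vector>

    // Word boundary characters of a candidate: its first character plus
    // characters that start a new word. "acaaCaaFooGxx" -> "acfg".
    std::string WordBoundaryChars( const std::string &text ) {
      std::string result;
      for ( size_t i = 0; i < text.size(); ++i ) {
        bool boundary = i == 0 ||
                        text[ i - 1 ] == '_' ||
                        ( islower( text[ i - 1 ] ) && isupper( text[ i ] ) );
        if ( boundary && text[ i ] != '_' )
          result += static_cast< char >( tolower( text[ i ] ) );
      }
      return result;
    }

    // Classic O(n*m) LCS length. For query "caafoo":
    // WordBoundaryChars("acaaCaaFooGxx") == "acfg" gives length 2 ("c", "f"),
    // while WordBoundaryChars("aCaafoog") == "ac" gives 1, so Candidate1 wins.
    int LongestCommonSubsequenceLength( const std::string &first,
                                        const std::string &second ) {
      std::vector< std::vector< int > > table(
          first.size() + 1, std::vector< int >( second.size() + 1, 0 ) );
      for ( size_t i = 1; i <= first.size(); ++i ) {
        for ( size_t j = 1; j <= second.size(); ++j ) {
          if ( first[ i - 1 ] == second[ j - 1 ] )
            table[ i ][ j ] = table[ i - 1 ][ j - 1 ] + 1;
          else
            table[ i ][ j ] =
                std::max( table[ i - 1 ][ j ], table[ i ][ j - 1 ] );
        }
      }
      return table[ first.size() ][ second.size() ];
    }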
The point is that we want to prefer candidates that have the query characters
"earlier" in their text, e.g. "xxabcxxx" over "xxxxxabc" for the query "abc".