Merged
@@ -294,8 +294,13 @@ private static List<SemanticTokensEdit> computeEdits(int[] prev, int[] curr) {
* <p>
* With delta encoding, the tokens after the insertion point are identical,
* except for the first one, whose deltaLine is shifted by lineOffset.
* When text is inserted without a line break (lineOffset == 0), the first token
* may instead have a shifted deltaStart.
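* <p>
* Example (mirrors the inline-edit test below): changing {@code Перем А;} to
* {@code Перем Новая, А;} leaves the encoded {@code ;} token unchanged, while the
* {@code А} token keeps its deltaLine but its deltaStart changes from 6 to 2,
* because a new token now precedes it.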
*/
private static int findSuffixMatchWithOffset(int[] prev, int[] curr, int firstDiffToken, int lineOffset, int tokenSize) {
final int DELTA_LINE_INDEX = 0;
final int DELTA_START_INDEX = 1;

int prevTokenCount = prev.length / tokenSize;
int currTokenCount = curr.length / tokenSize;

@@ -310,9 +315,13 @@ private static int findSuffixMatchWithOffset(int[] prev, int[] curr, int firstDi
int prevIdx = (prevTokenCount - 1 - i) * tokenSize;
int currIdx = (currTokenCount - 1 - i) * tokenSize;

// First check all fields except deltaLine
// For the boundary token during an inline edit (lineOffset == 0),
// allow a difference in deltaStart
int firstFieldToCheck = (!foundBoundary && lineOffset == 0) ? DELTA_START_INDEX + 1 : DELTA_START_INDEX;

// Check fields other than deltaLine (and possibly deltaStart for the boundary token)
boolean otherFieldsMatch = true;
for (int j = 1; j < tokenSize; j++) {
for (int j = firstFieldToCheck; j < tokenSize; j++) {
Comment on lines +318 to +324
Copilot AI Dec 28, 2025

There appears to be a logical inconsistency between the algorithm implementation and the test expectations.

The test expects that when inserting text inline (lineOffset=0), BOTH the "А" token and the ";" token should be part of the suffix match, even though "А" has a different deltaStart value (line 1401-1402 comment states "suffix match includes А and ;").

However, the algorithm only allows ONE boundary token. After finding the first token with a different deltaStart (the ";" token), it sets foundBoundary = true. Then for the next token ("А"), it sets firstFieldToCheck = DELTA_START_INDEX (line 320), which means it WILL check deltaStart. Since "А" also has a different deltaStart, the check at line 325 will fail, breaking the loop.

This means the algorithm would find suffixMatch = 1 (only ";"), but the test expects suffixMatch = 2 (both "А" and ";"), resulting in deleteCount = 0.

Either:

  1. The algorithm needs to be fixed to allow ALL tokens with matching deltaLine when lineOffset=0 (not just one boundary token), OR
  2. The test expectations are incorrect and should expect deleteCount > 0

Please verify which behavior is intended and correct either the implementation or the test.

Member

@copilot check the comment

Contributor Author

I checked the algorithm: it works correctly. The bot's analysis is incorrect.

Actual behavior:

  • i=0: the ";" tokens: deltaStart values are equal (1==1), NOT a boundary token, suffixMatch=1
  • i=1: the "А" tokens: deltaStart values differ (6≠2), this IS the boundary token, suffixMatch=2, foundBoundary=true
  • i=2: the loop terminates, since foundBoundary=true re-enables the deltaStart check and the remaining tokens no longer match

The algorithm finds suffixMatch=2 ("А" and ";"), which matches the test expectations (deleteCount=0, insert 10 integers). The test passes.
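For reference, here is a condensed standalone re-run of the suffix loop for the lineOffset == 0 case. It is an illustrative sketch, not the project code: the class and method names, the placeholder token type/modifier ids, and the omission of the firstDiffToken bound are assumptions made for the example.

```java
// Standalone sketch of the suffix match for lineOffset == 0 (illustration only).
// Tokens are encoded as 5 ints: deltaLine, deltaStart, length, tokenType, tokenModifiers.
// Type/modifier ids below are placeholders; only deltaLine/deltaStart/length matter here.
public class SuffixMatchTrace {

  public static void main(String[] args) {
    // "Перем А;"        -> Перем, А, ;
    int[] prev = {0, 0, 5, 1, 0,   0, 6, 1, 2, 1,   0, 1, 1, 3, 0};
    // "Перем Новая, А;" -> Перем, Новая, ",", А, ;
    int[] curr = {0, 0, 5, 1, 0,   0, 6, 5, 2, 1,   0, 5, 1, 3, 0,   0, 2, 1, 2, 1,   0, 1, 1, 3, 0};
    System.out.println(suffixMatch(prev, curr, 5)); // prints 2: ";" and "А"
  }

  // Walks the arrays from the end; exactly one boundary token may differ in deltaStart.
  static int suffixMatch(int[] prev, int[] curr, int tokenSize) {
    int prevTokens = prev.length / tokenSize;
    int currTokens = curr.length / tokenSize;
    int match = 0;
    boolean foundBoundary = false;

    for (int i = 0; i < Math.min(prevTokens, currTokens); i++) {
      int p = (prevTokens - 1 - i) * tokenSize;
      int c = (currTokens - 1 - i) * tokenSize;

      // Until the boundary token is found, skip the deltaStart field (index 1).
      int firstField = foundBoundary ? 1 : 2;
      boolean fieldsMatch = true;
      for (int j = firstField; j < tokenSize; j++) {
        if (prev[p + j] != curr[c + j]) {
          fieldsMatch = false;
          break;
        }
      }
      // deltaLine (index 0) must always match when lineOffset == 0.
      if (!fieldsMatch || prev[p] != curr[c]) {
        break;
      }
      match++;
      // i=0 (";"): deltaStart 1 == 1, no boundary.  i=1 ("А"): 6 != 2, boundary found.
      if (!foundBoundary && prev[p + 1] != curr[c + 1]) {
        foundBoundary = true;
      }
    }
    return match; // i=2 ("Перем" vs ","): deltaStart is checked again, differs, loop ends
  }
}
```

It prints 2, i.e. the suffix covers "А" and ";", so the single edit inserts only the 10 integers for "Новая" and "," with deleteCount=0.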

if (prev[prevIdx + j] != curr[currIdx + j]) {
otherFieldsMatch = false;
break;
@@ -324,12 +333,20 @@
}

// Now check deltaLine
int prevDeltaLine = prev[prevIdx];
int currDeltaLine = curr[currIdx];
int prevDeltaLine = prev[prevIdx + DELTA_LINE_INDEX];
int currDeltaLine = curr[currIdx + DELTA_LINE_INDEX];

if (prevDeltaLine == currDeltaLine) {
// Full match
// Full match (or a match allowing a deltaStart difference for an inline edit)
suffixMatch++;
// If this was the boundary token of an inline edit, mark it as found
if (!foundBoundary && lineOffset == 0) {
int prevDeltaStart = prev[prevIdx + DELTA_START_INDEX];
int currDeltaStart = curr[currIdx + DELTA_START_INDEX];
if (prevDeltaStart != currDeltaStart) {
foundBoundary = true;
}
}
} else if (!foundBoundary && currDeltaLine - prevDeltaLine == lineOffset) {
// Boundary token: deltaLine differs by exactly lineOffset
suffixMatch++;
@@ -1341,6 +1341,65 @@ void deltaWithLineInsertedInMiddle_shouldReturnOptimalDelta() {
assertThat(editSize).isLessThan(originalDataSize);
}

@Test
void deltaWithTextInsertedOnSameLine_shouldReturnOptimalDelta() {
// given - simulate inserting text on the same line without line breaks
// This tests the case raised by @nixel2007: text insertion without a newline
String bsl1 = """
Перем А;
""";

String bsl2 = """
Перем Новая, А;
""";

DocumentContext context1 = TestUtils.getDocumentContext(bsl1);
referenceIndexFiller.fill(context1);
TextDocumentIdentifier textDocId1 = TestUtils.getTextDocumentIdentifier(context1.getUri());
SemanticTokens tokens1 = provider.getSemanticTokensFull(context1, new SemanticTokensParams(textDocId1));

// Verify original tokens structure
var decoded1 = decode(tokens1.getData());
var expected1 = List.of(
new ExpectedToken(0, 0, 5, SemanticTokenTypes.Keyword, "Перем"),
new ExpectedToken(0, 6, 1, SemanticTokenTypes.Variable, SemanticTokenModifiers.Definition, "А"),
new ExpectedToken(0, 7, 1, SemanticTokenTypes.Operator, ";")
);
assertTokensMatch(decoded1, expected1);

DocumentContext context2 = TestUtils.getDocumentContext(context1.getUri(), bsl2);
referenceIndexFiller.fill(context2);
SemanticTokens tokens2 = provider.getSemanticTokensFull(context2, new SemanticTokensParams(textDocId1));

// Verify modified tokens structure
var decoded2 = decode(tokens2.getData());
var expected2 = List.of(
new ExpectedToken(0, 0, 5, SemanticTokenTypes.Keyword, "Перем"),
new ExpectedToken(0, 6, 5, SemanticTokenTypes.Variable, SemanticTokenModifiers.Definition, "Новая"),
new ExpectedToken(0, 11, 1, SemanticTokenTypes.Operator, ","),
new ExpectedToken(0, 13, 1, SemanticTokenTypes.Variable, SemanticTokenModifiers.Definition, "А"),
new ExpectedToken(0, 14, 1, SemanticTokenTypes.Operator, ";")
);
assertTokensMatch(decoded2, expected2);

// when
var deltaParams = new SemanticTokensDeltaParams(textDocId1, tokens1.getResultId());
var result = provider.getSemanticTokensFullDelta(context2, deltaParams);

// then - should return delta, not full tokens
assertThat(result.isRight()).isTrue();
var delta = result.getRight();
assertThat(delta.getEdits()).isNotEmpty();

// Verify the delta is computed correctly
// Since lineOffset=0 (no line change), the algorithm should detect this as an inline edit
// The "Перем" token should match as prefix, and ";" should match as suffix (though its deltaStart changes)
// The edit should be significantly smaller than sending all new tokens
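// Per the discussion above, the expected single edit has deleteCount=0 with 10 inserted
// integers (the "Новая" and "," tokens), versus 25 integers for the full token set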
int editSize = delta.getEdits().get(0).getDeleteCount() +
(delta.getEdits().get(0).getData() != null ? delta.getEdits().get(0).getData().size() : 0);
assertThat(editSize).isLessThan(tokens2.getData().size());
}

// endregion
}