[Bug] Conversations created without a workspace silently disappear from Chat History — data saved but index not updated

1. What I Found — The Problem

After using Antigravity for several months, I discovered that 13 out of 66 conversations were completely invisible in the Chat History UI. These conversations had their .pb data files fully saved in ~/.gemini/antigravity/conversations/, but they never appeared in any history list — not under any workspace, not under “Show more”, nowhere.

The affected conversations spanned from December 2025 to March 2026, meaning this is a long-standing, systematic issue — not a one-time glitch.

Data inconsistency:

  • Total conversation .pb files on disk: 66
  • Conversations visible in Chat History: 53
  • Conversations silently missing: 13 (20%)

2. How I Encountered It — The Scenario

Here’s the exact sequence that led me to discover this issue:

  1. I opened the Agent Manager and started a new conversation (without selecting a workspace)
  2. I had a productive, multi-turn conversation about Antigravity Skills — the AI generated detailed documentation for me
  3. I then accidentally clicked the resource configurator, which opened a folder
  4. I went back to Chat History to continue my Skills conversation
  5. The conversation was gone. Not under any workspace, not in “Show more”, completely vanished.

At first I thought I had accidentally deleted it. But after investigation, I realized the conversation data was fully intact on disk — it just wasn’t indexed.

3. Root Cause Investigation — What’s Actually Happening

I dug into Antigravity’s internal data structures and found the following:

Where Chat History lives

The Chat History UI does not read from the conversations/ directory directly. Instead, it reads from an index called trajectorySummaries, stored as base64-encoded protobuf data in a SQLite database:

~/Library/Application Support/Antigravity/User/globalStorage/state.vscdb
    → Table: ItemTable
    → Key: antigravityUnifiedStateSync.trajectorySummaries

The index structure (reverse-engineered)

message TrajectorySummaries {
  repeated TrajectoryEntry entries = 1;
}

message TrajectoryEntry {
  string conversation_id = 1;
  TrajectoryData data = 2;       // contains base64-encoded inner message
}

message TrajectoryInner {
  string title = 1;
  int32 step_count = 2;
  Timestamp created_at = 3;
  string session_id = 4;
  int32 status = 5;
  Timestamp last_modified = 7;
  WorkspaceInfo workspace = 9;   // ← KEY FIELD
  Timestamp reference_time = 10;
  string extra = 15;
  int32 aux_step_count = 16;
}
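Whether a given entry carries field 9 can be checked without protoc. Here is a minimal wire-format scanner (a sketch of my own; it only handles the varint and length-delimited wire types, which is all the message above uses):

```python
def field_numbers(data):
    """Yield the top-level field numbers present in a raw protobuf message.

    Supports wire type 0 (varint) and wire type 2 (length-delimited),
    which covers every field in TrajectoryInner.
    """
    def read_varint(buf, pos):
        value, shift = 0, 0
        while True:
            b = buf[pos]
            pos += 1
            value |= (b & 0x7F) << shift
            shift += 7
            if not b & 0x80:
                return value, pos

    i = 0
    while i < len(data):
        tag, i = read_varint(data, i)
        field, wire_type = tag >> 3, tag & 0x07
        yield field
        if wire_type == 0:          # varint payload: skip its bytes
            _, i = read_varint(data, i)
        elif wire_type == 2:        # length-delimited payload: skip length bytes
            length, i = read_varint(data, i)
            i += length
        else:
            raise ValueError(f"unsupported wire type {wire_type}")
```

`9 in set(field_numbers(inner_bytes))` then tells you whether a decoded inner message has a workspace bound.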

The bug

When a conversation is created without a workspace (field 9 is absent), the conversation .pb file is correctly written to disk, but no corresponding entry is added to the trajectorySummaries index. Since the UI reads exclusively from this index, the conversation becomes invisible.

This is a data consistency bug: the write path for conversation data and the write path for the index are not synchronized when no workspace is bound.

4. How I Fixed It — The Workaround

I wrote a Python script (fix_trajectory.py) that:

  1. Backs up the SQLite database before any changes
  2. Scans the conversations/ directory for all .pb files
  3. Compares against the UUIDs found in the trajectorySummaries data
  4. Constructs proper protobuf entries for the missing conversations (using low-level varint/length-delimited encoding)
  5. Appends the new entries to the existing index data
  6. Validates the result using protoc --decode_raw
  7. Writes the updated index back to the database
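Step 4 is the fiddly part. For readers unfamiliar with the wire format, the varint encoding it relies on looks like this (a standalone sketch of the standard protobuf varint, not the script itself):

```python
def encode_varint(value):
    """Encode a non-negative integer as a protobuf varint:
    7 bits per byte, least-significant group first, with the
    high bit set on every byte except the last."""
    if value < 0:
        raise ValueError("varints here are unsigned")
    out = bytearray()
    while value > 0x7F:
        out.append((value & 0x7F) | 0x80)
        value >>= 7
    out.append(value)
    return bytes(out)
```

For example, `encode_varint(300)` yields `b'\xac\x02'`: the low seven bits (0b0101100) with a continuation bit, then the remaining 0b10.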

After running the script and restarting Antigravity, all 13 previously invisible conversations appeared in Chat History. The conversations were fully intact with complete chat history, generated artifacts, and all metadata.

Did it work? — Yes, completely.

All missing conversations were successfully restored. The fix has been stable — no conversations have disappeared again since the repair. The data was always there; only the index was broken.

5. Is This the Right Fix? — No, It Shouldn’t Be Necessary.

My script is a workaround, not a proper solution. Here’s why:

  • Requires technical expertise — Users need to understand SQLite, protobuf encoding, and Antigravity internals
  • Risky — Direct database manipulation could corrupt data if done incorrectly
  • Not discoverable — Regular users would never know their conversations still exist on disk
  • Not preventive — The script only repairs past damage; new workspace-less conversations will still be lost

What Antigravity should do instead:

  1. Index all conversations — Every conversation must be written to trajectorySummaries regardless of workspace association. Conversations without a workspace should be indexed under a “General” or “No Workspace” category.

  2. Startup consistency check — On launch, compare .pb files in conversations/ against trajectorySummaries entries. Auto-repair any discrepancies.

  3. Prevent the gap — If workspace binding is truly required for indexing, warn the user: “This conversation is not associated with a workspace and may not appear in your history.”

  4. Never silently lose user data — This is the most important principle. Even if the UI filtering is intentional, making conversations completely unfindable with no recovery path is a serious UX failure.

Environment

  • OS: macOS (Apple Silicon)
  • Antigravity Version: Latest as of March 2026
  • Mode: Agent Manager
  • Database path: ~/Library/Application Support/Antigravity/User/globalStorage/state.vscdb
  • Conversations path: ~/.gemini/antigravity/conversations/

Summary

A 20% conversation loss rate is severe. The data is saved but the index is not updated, creating a silent data inconsistency. Users cannot discover this problem on their own, and there is no built-in recovery mechanism. This needs to be fixed at the application level — both preventing future occurrences and repairing existing damage automatically.

3 Likes

I think I’m having the same problem you did, but it seems like you figured out how to fix it.

The conversations are actually still there and can be searched for. But for some unknown reason, whenever I open the project through the .code-workspace file, the chats won’t save to the corresponding project. Opening the folder directly works fine. This issue seems to have started from version 1.18.

Multi-root / cross-directory projects are now really frustrating to work with. Every time I open one, I can’t see the previous conversations at all; I have to search for the conversation ID and then @ the ID to make the IDE remember what was said before.

You’re a lifesaver! Following your instructions, I had Antigravity rescan all the .pb files, and it found 25 conversations that were missing index entries. They were successfully repaired and reappeared in my conversation history. But it also warned me that conversations created from these multi-directory workspace files will still trigger the same issue next time, because the underlying bug is still there.

1 Like

I’m also being severely impacted by this bug. Conversations regularly vanish from the Agent Manager UI, but I can still find their timestamped .pb files under conversations/.

@Brevin is your repair script fix_trajectory.py in a state where you could share it?

1 Like

Hi! @Ben_Hutchison
Yes, I’ve cleaned up the script into a universal version that should work for anyone. Here it is:

How to use:

  1. Completely quit Antigravity (Cmd+Q on macOS, not just close the window)
  2. Save the script below as fix_antigravity_history.py
  3. Run in your system terminal: python3 fix_antigravity_history.py
  4. Reopen Antigravity — your recovered conversations should appear in Chat History

The script will:

  • Automatically detect your OS and find the correct database path (macOS/Linux/Windows)
  • Scan all .pb files in ~/.gemini/antigravity/conversations/
  • Compare against the trajectorySummaries index in the SQLite database
  • Show you exactly which conversations are missing
  • Ask for confirmation before making any changes
  • Back up your database before modifying anything
  • Validate the protobuf format after the repair

Important: Make sure Antigravity is completely closed before running the script. If it’s still running, the app will overwrite your changes when it exits.

If anything goes wrong, the script prints a restore command you can use to revert to the backup.

I just tested it on my machine — it found and recovered 4 new missing conversations that had disappeared since my last fix. So this bug is definitely still happening with every workspace-less conversation.

#!/usr/bin/env python3
"""
Antigravity Chat History Index Repair Tool (Universal Version)

Bug: Conversations created without a workspace are not written to the
trajectorySummaries index, making them invisible in the Chat History UI.
The conversation .pb files are fully saved on disk — only the index is missing.

This script automatically detects and repairs the missing index entries.

Related forum post:
https://discuss.ai.google.dev/t/bug-conversations-created-without-a-workspace-silently-disappear-from-chat-history-data-saved-but-index-not-updated/135008

Usage:
  1. Completely quit Antigravity (Cmd+Q on macOS)
  2. Run in system terminal: python3 fix_antigravity_history.py
  3. Reopen Antigravity
"""

import base64
import sqlite3
import os
import re
import shutil
import subprocess
import time
import datetime
import platform


# ===== Low-level Protobuf Encoding Utilities =====

def encode_varint(value):
    """Encode a varint (variable-length integer encoding)."""
    result = []
    while value > 0x7f:
        result.append((value & 0x7f) | 0x80)
        value >>= 7
    result.append(value & 0x7f)
    return bytes(result)


def encode_field_varint(field_number, value):
    """Encode a varint-type protobuf field."""
    tag = (field_number << 3) | 0  # wire type 0 = varint
    return encode_varint(tag) + encode_varint(value)


def encode_field_bytes(field_number, data):
    """Encode a length-delimited protobuf field (for bytes/string/embedded message)."""
    if isinstance(data, str):
        data = data.encode('utf-8')
    tag = (field_number << 3) | 2  # wire type 2 = length-delimited
    return encode_varint(tag) + encode_varint(len(data)) + data


def encode_timestamp_message(field_number, seconds, nanos=0):
    """Encode a nested Timestamp message (field 1=seconds, field 2=nanos)."""
    inner = encode_field_varint(1, seconds)
    if nanos:
        inner = inner + encode_field_varint(2, nanos)
    return encode_field_bytes(field_number, inner)


def build_trajectory_inner(title, step_count, created_seconds, created_nanos,
                            session_id, status, last_modified_seconds,
                            last_modified_nanos):
    """
    Build the inner protobuf data for a trajectory entry.
    Structure reverse-engineered from existing conversation entries:
      field 1: string title
      field 2: varint step_count
      field 3: Timestamp created_at
      field 4: string session_id
      field 5: varint status (1=completed)
      field 7: Timestamp last_modified
      field 10: Timestamp (reference time, usually same as created_at)
      field 15: string (empty)
      field 16: varint (auxiliary step count)
    """
    data = b''
    data += encode_field_bytes(1, title)
    data += encode_field_varint(2, step_count)
    data += encode_timestamp_message(3, created_seconds, created_nanos)
    data += encode_field_bytes(4, session_id)
    data += encode_field_varint(5, status)
    data += encode_timestamp_message(7, last_modified_seconds, last_modified_nanos)
    # NOTE: field 9 (workspace info) is intentionally skipped,
    # because these conversations have no workspace association
    data += encode_timestamp_message(10, created_seconds, created_nanos)
    data += encode_field_bytes(15, "")
    data += encode_field_varint(16, max(1, step_count - 8))
    return data


def build_trajectory_entry(conversation_id, inner_data):
    """
    Build the outer trajectory entry.
    Format: field 1=conversation_id, field 2={ field 1=base64(inner_data) }
    """
    b64_inner = base64.b64encode(inner_data).decode('ascii')
    field_2_inner = encode_field_bytes(1, b64_inner)
    entry = encode_field_bytes(1, conversation_id)
    entry += encode_field_bytes(2, field_2_inner)
    return entry


def wrap_as_outer_entry(entry_data):
    """Wrap as a top-level repeated field 1 message."""
    return encode_field_bytes(1, entry_data)


# ===== Path Detection =====

def get_db_path():
    """Detect the Antigravity database path based on the operating system."""
    system = platform.system()
    if system == 'Darwin':  # macOS
        return os.path.expanduser(
            '~/Library/Application Support/Antigravity/User/globalStorage/state.vscdb'
        )
    elif system == 'Linux':
        return os.path.expanduser(
            '~/.config/Antigravity/User/globalStorage/state.vscdb'
        )
    elif system == 'Windows':
        appdata = os.environ.get('APPDATA', '')
        return os.path.join(appdata, 'Antigravity', 'User', 'globalStorage', 'state.vscdb')
    else:
        print(f"Warning: Unknown OS: {system}. Trying macOS path...")
        return os.path.expanduser(
            '~/Library/Application Support/Antigravity/User/globalStorage/state.vscdb'
        )


def get_conversations_dir():
    """Detect the conversations directory path."""
    return os.path.expanduser('~/.gemini/antigravity/conversations')


# ===== Auto-Detection of Missing Conversations =====

def get_file_conversations(conversations_dir):
    """
    Scan the conversations directory and return a dict of
    conversation_id -> file_metadata for all .pb files.
    """
    conversations = {}
    if not os.path.isdir(conversations_dir):
        return conversations

    for filename in os.listdir(conversations_dir):
        if filename.endswith('.pb'):
            conv_id = filename[:-3]  # remove .pb extension
            # Validate UUID format
            if re.match(r'^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$', conv_id):
                filepath = os.path.join(conversations_dir, filename)
                stat = os.stat(filepath)
                conversations[conv_id] = {
                    'id': conv_id,
                    'filepath': filepath,
                    'size': stat.st_size,
                    'mtime': stat.st_mtime,
                    'ctime': getattr(stat, 'st_birthtime', stat.st_ctime),
                }
    return conversations


def get_indexed_uuids(existing_data):
    """Extract all UUIDs from the existing trajectorySummaries data."""
    return set(
        u.decode() for u in re.findall(
            b'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}',
            existing_data
        )
    )


def estimate_step_count(file_size):
    """
    Estimate the step count based on file size.
    Rough heuristic: larger files generally have more conversation turns.
    """
    if file_size < 10000:
        return 3
    elif file_size < 50000:
        return 5
    elif file_size < 200000:
        return 10
    elif file_size < 500000:
        return 15
    elif file_size < 1000000:
        return 20
    else:
        return 30


# ===== Process Check =====

def check_antigravity_running():
    """Check if Antigravity is currently running."""
    system = platform.system()
    try:
        if system == 'Darwin' or system == 'Linux':
            result = subprocess.run(
                ['pgrep', '-f', 'Antigravity'],
                capture_output=True, text=True
            )
            return result.returncode == 0
        elif system == 'Windows':
            result = subprocess.run(
                ['tasklist', '/FI', 'IMAGENAME eq Antigravity.exe'],
                capture_output=True, text=True
            )
            return 'Antigravity.exe' in result.stdout
    except Exception:
        pass
    return False


# ===== Main Logic =====

def main():
    print("=" * 60)
    print("  Antigravity Chat History Index Repair Tool")
    print("  (Universal Version)")
    print("=" * 60)
    print()

    # 1. Locate paths
    db_path = get_db_path()
    conversations_dir = get_conversations_dir()

    print(f"Database: {db_path}")
    print(f"Conversations: {conversations_dir}")
    print()

    if not os.path.exists(db_path):
        print("ERROR: Antigravity database file not found.")
        print(f"   Expected at: {db_path}")
        print("   Make sure Antigravity is installed and has been used at least once.")
        return False

    if not os.path.isdir(conversations_dir):
        print("ERROR: Conversations directory not found.")
        print(f"   Expected at: {conversations_dir}")
        return False

    # 2. Check if Antigravity is running
    if check_antigravity_running():
        print("WARNING: Antigravity appears to be running!")
        print("   Please completely quit the Antigravity app first (Cmd+Q on macOS).")
        print("   If you modify the database while Antigravity is running,")
        print("   the changes may be overwritten when the app exits.")
        response = input("\n   Continue anyway? (y/N): ").strip().lower()
        if response != 'y':
            print("   Cancelled.")
            return False
        print()

    # 3. Scan conversation files
    file_conversations = get_file_conversations(conversations_dir)
    print(f"Conversation files on disk: {len(file_conversations)}")

    # 4. Read existing trajectory summaries
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    cursor.execute(
        "SELECT value FROM ItemTable WHERE key = 'antigravityUnifiedStateSync.trajectorySummaries'"
    )
    row = cursor.fetchone()
    if not row:
        print("ERROR: trajectorySummaries data not found in database.")
        print("   This might mean you have a different version of Antigravity.")
        conn.close()
        return False

    existing_b64 = row[0]
    existing_data = base64.b64decode(existing_b64)
    existing_uuids = get_indexed_uuids(existing_data)
    print(f"Conversations in index: {len(existing_uuids)}")

    # 5. Find missing conversations
    missing = {}
    for conv_id, meta in file_conversations.items():
        if conv_id not in existing_uuids:
            missing[conv_id] = meta

    if not missing:
        print("\nNo missing conversations found! All conversations are properly indexed.")
        conn.close()
        return True

    print(f"\nFound {len(missing)} conversation(s) missing from the index:\n")
    for conv_id, meta in sorted(missing.items(), key=lambda x: x[1]['mtime']):
        dt = datetime.datetime.fromtimestamp(meta['mtime'])
        size_kb = meta['size'] / 1024
        print(f"   - {conv_id}  |  {dt.strftime('%Y-%m-%d %H:%M')}  |  {size_kb:.1f} KB")

    # 6. Ask for confirmation
    print(f"\nThis will add {len(missing)} conversation(s) to the history index.")
    response = input("   Proceed? (y/N): ").strip().lower()
    if response != 'y':
        print("   Cancelled.")
        conn.close()
        return False

    # 7. Backup database
    timestamp = int(time.time())
    backup_path = db_path + f'.backup_{timestamp}'
    shutil.copy2(db_path, backup_path)
    print(f"\nDatabase backed up to: {backup_path}")

    # 8. Build new entries
    new_entries = b''
    added_count = 0

    for conv_id, meta in missing.items():
        created_seconds = int(meta.get('ctime', meta['mtime']))
        modified_seconds = int(meta['mtime'])
        step_count = estimate_step_count(meta['size'])
        title = f"Recovered Conversation {conv_id[:8]}"

        inner = build_trajectory_inner(
            title=title,
            step_count=step_count,
            created_seconds=created_seconds,
            created_nanos=0,
            session_id=conv_id,
            status=1,
            last_modified_seconds=modified_seconds,
            last_modified_nanos=0,
        )

        entry = build_trajectory_entry(conv_id, inner)
        wrapped = wrap_as_outer_entry(entry)
        new_entries += wrapped
        added_count += 1

        dt = datetime.datetime.fromtimestamp(modified_seconds)
        print(f"   + Added: {conv_id[:8]}...  |  {dt.strftime('%Y-%m-%d %H:%M')}")

    # 9. Merge data
    updated_data = existing_data + new_entries
    print(f"\nIndex size: {len(existing_data)} -> {len(updated_data)} bytes (+{len(new_entries)})")

    # 10. Validate protobuf format (optional, requires protoc)
    try:
        result = subprocess.run(
            ['protoc', '--decode_raw'],
            input=updated_data,
            capture_output=True
        )
        if result.returncode != 0:
            print(f"ERROR: Protobuf format validation failed: {result.stderr.decode()}")
            print("   Aborting. Your database has NOT been modified.")
            conn.close()
            return False
        print("Protobuf format validation passed")
    except FileNotFoundError:
        print("Note: protoc not found, skipping format validation (this is usually fine)")

    # 11. Verify new UUIDs are in the updated data
    updated_uuids = get_indexed_uuids(updated_data)
    all_verified = True
    for conv_id in missing:
        if conv_id not in updated_uuids:
            print(f"ERROR: Verification failed: {conv_id} not found in updated data")
            all_verified = False

    if not all_verified:
        print("ERROR: Data verification failed. Aborting. Your database has NOT been modified.")
        conn.close()
        return False

    # 12. Write to database
    updated_b64 = base64.b64encode(updated_data).decode('ascii')
    cursor.execute(
        "UPDATE ItemTable SET value = ? WHERE key = 'antigravityUnifiedStateSync.trajectorySummaries'",
        (updated_b64,)
    )
    conn.commit()
    conn.close()

    print(f"\nSuccess! Added {added_count} conversation(s) to the history index.")
    print("\nNext steps:")
    print("   1. Open the Antigravity app")
    print("   2. Check Chat History - your recovered conversations should now be visible")
    print("   3. The titles will show as 'Recovered Conversation XXXXXXXX'")
    print("      (they will update to the real title once you open them)")
    print(f"\n   If anything goes wrong, restore the backup:")
    print(f"   cp '{backup_path}' '{db_path}'")

    return True


if __name__ == '__main__':
    success = main()
    print()
    if not success:
        print("Repair not completed. Check the error messages above.")
    print("=" * 60)

Let me know if it works for you! And if you’re on Linux or Windows, the script should auto-detect the correct paths — but let me know if you run into any issues.

2 Likes

Thanks for sharing @Brevin 🙏

So actually, after asking you, I pasted your earlier message into Claude and got it to write a Python script for me. Running it rescued 4 conversations that had gotten lost! 🤦

They really need to fix this flawed design…

2 Likes

Hello @Brevin @Ben_Hutchison @Michaol, welcome to the AI Forum!

Thank you for bringing this issue to our attention and sharing the workaround. I have forwarded your findings to the relevant team. Our engineering team is currently investigating the matter, and we appreciate your patience as we work toward a resolution.
Thank you for bringing the issue to our attention and sharing the work around. I have forwarded your findings to the relevant team. Our engineering team is currently investigating the matter, and we appreciate your patience as we work toward a resolution.