v0.2.8 - Open Source

HILT

Human-AI Log Tracing

An open-source, privacy-first format for logging human-AI interactions. One line of code captures all your LLM calls.

pip install hilt-python

Features

Privacy-first

Your data stays in your environment. GDPR compliant.

One-line setup

A single line of code to instrument all your LLM calls.

Auto-instrumentation

Automatic support for the OpenAI SDK; Anthropic and Gemini support coming soon.

Flexible backends

Local JSONL export or real-time Google Sheets.

Quick Start

from hilt import instrument, uninstrument
from openai import OpenAI

# Enable automatic logging
instrument(backend="local", filepath="logs/chat.jsonl")

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)

# Stop logging when done
uninstrument()

After calling instrument(), all prompts and completions are automatically logged with latency, tokens, cost and status codes.

Available columns

Select the columns you need for compliance or dashboards.

Column           Description
timestamp        ISO 8601 timestamp (UTC)
conversation_id  Stable thread identifier
event_id         Unique event UUID
speaker          human / agent
action           Event type: prompt, completion, or system
message          Normalized content (truncated to 500 characters)
tokens_in        Prompt token count
tokens_out       Completion token count
cost_usd         Estimated cost in USD (6 decimal places)
latency_ms       Wall-clock latency in milliseconds
model            Provider/model label
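For reference, an illustrative record using the column names above can be parsed with the standard library. The values below are invented for this example, not captured from a real run:

```python
import json

# Illustrative HILT log record; field names follow the column table above,
# values are made up for demonstration purposes.
line = (
    '{"timestamp": "2025-01-15T10:32:07Z", '
    '"conversation_id": "conv-42", '
    '"event_id": "6f1c2e9a-0b7d-4f3e-9a1b-2c3d4e5f6a7b", '
    '"speaker": "agent", "action": "completion", '
    '"message": "Hello! How can I help you today?", '
    '"tokens_in": 9, "tokens_out": 12, '
    '"cost_usd": 0.000011, "latency_ms": 412, '
    '"model": "openai/gpt-4o-mini"}'
)

record = json.loads(line)
print(record["model"], record["latency_ms"])
```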

Storage options

Local JSONL (default)

instrument(
  backend="local",
  filepath="logs/app.jsonl"
)
  • Privacy-first: local data
  • Compatible with Pandas, Spark, and other JSONL-aware tools
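Because each line of the log file is a standalone JSON object, it can be analyzed with any tool. A minimal standard-library sketch that totals cost and output tokens per model (field names are taken from the column table above; the aggregation logic is an assumption, not part of HILT itself):

```python
import json
from collections import defaultdict
from pathlib import Path

def summarize_costs(path: str) -> dict[str, dict[str, float]]:
    """Aggregate cost and token usage per model from a HILT JSONL file."""
    totals: dict[str, dict[str, float]] = defaultdict(
        lambda: {"cost_usd": 0.0, "tokens_out": 0}
    )
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines
        event = json.loads(line)
        model = event.get("model", "unknown")
        totals[model]["cost_usd"] += event.get("cost_usd", 0.0)
        totals[model]["tokens_out"] += event.get("tokens_out", 0)
    return dict(totals)
```

The same file loads directly into Pandas with `pd.read_json(path, lines=True)` when a DataFrame is more convenient.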

Google Sheets (real-time)

instrument(
  backend="sheets",
  sheet_id="1abc...",
  credentials_path="creds.json"
)
  • Ideal for support, QA, and cost tracking
  • Install: pip install "hilt[sheets]"

Roadmap

OpenAI SDK auto-instrumentation
Anthropic Claude auto-instrumentation
Google Gemini auto-instrumentation
LangGraph auto-instrumentation

Apache 2.0 License - Maintained by mcsEdition

Tags: ai, logging, llm, privacy, gdpr, openai

Python 3.10+ required

In brief

How does HILT log LLM calls?

HILT (Human-AI Log Tracing) is an open-source Python library that auto-instruments OpenAI SDK calls with a single line of code. Once enabled, HILT captures 11 columns per request: timestamp, conversation and event identifiers, speaker, action type, normalized message content, input and output token counts, estimated cost in USD, wall-clock latency in milliseconds, and the model label. Logs are written to a local JSONL file by default, or pushed to a Google Sheets spreadsheet if you configure a service account key. HILT supports GPT-3.5, GPT-4, GPT-4o and GPT-5 models from day one, plus any model compatible with the standard OpenAI API. The library is under 800 lines of pure Python.

Is HILT GDPR-compliant?

HILT is designed around three documented GDPR principles. First, local storage by default: no data leaves the application's machine without explicit configuration, which simplifies the legal basis for processing. Second, automatic redaction hooks mask emails, phone numbers, card numbers, and IP addresses via regular expressions provided in the documentation. Third, every JSONL line contains a `redacted` field describing the transformations applied, which streamlines audits. HILT is distributed under the Apache 2.0 license on PyPI at https://pypi.org/project/hilt-python/ and the source code is available on GitHub at https://github.com/Stefen-Taime/hilt-python. The library is maintained by mcsÉdition and used in production by engineering teams of fewer than 50 people.
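The redaction hooks described above can be approximated with a few regular expressions. The patterns below are a simplified sketch for illustration; the actual patterns shipped in HILT's documentation may be stricter:

```python
import re

# Simplified PII patterns (illustrative; HILT's documented patterns may differ).
# Order matters: IPv4 runs before the looser phone pattern so IP addresses
# are not mislabeled as phone numbers.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Mask PII and return the cleaned text plus the rule names applied."""
    applied = []
    for name, pattern in PATTERNS.items():
        text, count = pattern.subn(f"[{name.upper()}]", text)
        if count:
            applied.append(name)
    return text, applied
```

The list of applied rule names is what a `redacted` field like the one described above could record for audit purposes.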

Frequently asked questions

Is HILT a Langfuse, Helicone or LangSmith alternative?

Yes. HILT is an open-source Python library for LLM logging and observability — same niche as Langfuse, Helicone or LangSmith. Differentiators: privacy-first, local JSONL or Google Sheets backend, GDPR-compliant out of the box, and one-line install.

Which models does HILT support?

HILT auto-instruments the OpenAI SDK (GPT-4, GPT-4o, GPT-5...) and is extensible to other SDKs.