Automated HIPAA Audit Logging with Python: A Script You Can Run Today
If an auditor walked into your practice tomorrow and asked to see your audit logs, could you produce them? Not just "we have logs somewhere" — could you show them a complete, tamper-evident record of every time someone accessed patient information, every login attempt that failed, every administrative change to your systems?
For most small healthcare practices in Central Florida — from DeLand to Daytona Beach — the honest answer is no. They know they need audit logs. The HIPAA Security Rule has required them since 2005. But between seeing patients, managing staff, and keeping the lights on, implementing a proper audit logging system always ends up at the bottom of the priority list.
That changes today. This post gives you a complete Python script that you can copy, configure, and have running in under thirty minutes. It logs access events with tamper-evident hash chains, handles log rotation and compression automatically, manages the six-year retention requirement, and generates compliance reports that will make your auditor smile. Pair it with our HIPAA security reminders script and you have two of the most commonly cited audit deficiencies handled with zero ongoing effort.
Table of Contents
- Why HIPAA Audit Logging Is Non-Negotiable in 2026
- What the HIPAA Security Rule Actually Requires for Audit Logs
- The Python Audit Logger: Complete, Runnable Code
- How the Tamper-Evident Hash Chain Works
- Setting Up Automated Log Rotation
- Scheduling with Cron: Set It and Forget It
- The MJS Log Analyzer: Spot Anomalies Before Auditors Do
- Storing Logs for Six Years Without Breaking the Bank
- Testing Your Audit Logging (Before an Auditor Tests It for You)
- Frequently Asked Questions
Why HIPAA Audit Logging Is Non-Negotiable in 2026
Let me be direct: the 2026 HIPAA Security Rule updates make audit logging more important than ever. The updated rule introduces stricter requirements for technical testing, mandates that covered entities perform and document comprehensive compliance audits at least annually, and expands incident response obligations.
What this means in practice: if you experience a breach — and the question is when, not if — your audit logs are the first thing investigators examine. Logs tell the story of what happened, when it happened, who was involved, and how far the breach spread. Without logs, you cannot contain the breach, you cannot notify affected patients accurately, and you cannot demonstrate to OCR (the Office for Civil Rights) that you took reasonable measures to protect ePHI.
The penalty math is straightforward. HIPAA violations are tiered from one hundred dollars to fifty thousand dollars per violation, with an annual maximum of two million dollars per violation category. "Willful neglect" — which includes knowing you need audit logs and not having them — starts at the highest tier. A single breach investigation without adequate logging can result in six-figure fines before you even get to the remediation costs.
For practices in DeLand, Ormond Beach, Port Orange, and across Volusia County, the risk is not abstract. OCR has increased enforcement actions year over year, and small practices are not exempt. In fact, small practices are disproportionately targeted because they are more likely to have compliance gaps and less likely to have legal teams that drag out investigations.
The good news: audit logging is one of the easiest HIPAA requirements to automate. Unlike risk assessments (which require human judgment) or staff training (which requires human attention), logging is a purely technical control that can be set up once and run indefinitely.
What the HIPAA Security Rule Actually Requires for Audit Logs
Before we write code, let me clarify exactly what the regulation requires, because most online resources either oversimplify or overcomplicate this.
The Audit Controls standard lives at 45 C.F.R. section 164.312(b). It requires covered entities and business associates to "implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information."
That is the full text. It is deliberately vague about implementation details, which gives you flexibility but also means you need to make defensible choices. Here is what "defensible" looks like in practice:
What to log (the who, what, when, where, why):
- Who accessed the system (user ID, not the patient's name)
- What they did (viewed, created, modified, deleted, exported)
- When they did it (timestamp with timezone, ISO 8601 format)
- Where they accessed from (hostname, IP address if available)
- What resource they accessed (patient chart ID, report type, etc.)
- Whether the action succeeded or failed
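Concretely, a single entry covering all six points might look like this (the values are illustrative; the field names mirror the script later in this post):

```json
{
  "timestamp": "2026-01-15T14:32:07Z",
  "event_type": "access",
  "user_id": "jsmith",
  "action": "viewed patient chart",
  "resource": "Patient/12345",
  "outcome": "success",
  "hostname": "frontdesk-pc"
}
```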
How long to keep logs: The Security Rule does not specify a retention period for audit logs themselves, but section 164.530(j) requires that compliance documentation be retained for six years from the date it was created or last in effect. Most compliance experts — and every auditor I have worked with — interpret this to include audit logs. Six years is the standard.
Tamper evidence: The regulation does not use the word "tamper-evident," but it does require that you can "examine" the logs — which implies they must be trustworthy. If your logs can be silently modified after the fact, they are not trustworthy. Cryptographic hash chains solve this problem by mathematically linking each log entry to the previous one. If anyone modifies or deletes an entry, the chain breaks and the tampering is detectable.
The Python Audit Logger: Complete, Runnable Code
Here is the script. Copy it, save it as hipaa_audit_logger.py, and you can start logging immediately.
#!/usr/bin/env python3
"""
hipaa_audit_logger.py
HIPAA-compliant audit logging with tamper-evident hash chains.
Usage:
    python hipaa_audit_logger.py --init
    python hipaa_audit_logger.py --log access --user jsmith --action "viewed patient chart" --resource "Patient/12345"
    python hipaa_audit_logger.py --verify
    python hipaa_audit_logger.py --rotate
    python hipaa_audit_logger.py --report
"""
import argparse
import gzip
import hashlib
import json
import os
import shutil
import sys
from datetime import datetime, timedelta
from pathlib import Path

DEFAULT_LOG_DIR = Path("./hipaa_audit_logs")
DEFAULT_LOG_FILE = "audit.jsonl"
HASH_ALGORITHM = "sha256"
RETENTION_YEARS = 6
RETENTION_DAYS = RETENTION_YEARS * 365


def compute_hash(data_string, previous_hash="0" * 64):
    """Compute a SHA-256 hash chained to the previous entry."""
    payload = f"{previous_hash}:{data_string}"
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def get_last_hash(log_path):
    """Read the hash of the last log entry without loading the whole file."""
    if not log_path.exists() or log_path.stat().st_size == 0:
        return "0" * 64
    with open(log_path, "rb") as f:
        f.seek(0, 2)
        size = f.tell()
        # Scan backwards from just before the trailing newline to find
        # the start of the last line, then read only that line.
        pos = size - 2
        while pos > 0:
            f.seek(pos)
            if f.read(1) == b"\n":
                pos += 1
                break
            pos -= 1
        f.seek(max(pos, 0))
        last_line = f.readline().decode("utf-8").strip()
    if last_line:
        return json.loads(last_line).get("hash", "0" * 64)
    return "0" * 64


def create_audit_entry(event_type, user_id, action, resource=None,
                       outcome="success", details=None, previous_hash=None):
    """Create a HIPAA-compliant audit log entry."""
    entry = {
        "timestamp": datetime.utcnow().isoformat() + "Z",
        "event_type": event_type,
        "user_id": user_id,
        "action": action,
        "resource": resource,
        "outcome": outcome,
        "hostname": (os.uname().nodename if hasattr(os, "uname")
                     else os.environ.get("COMPUTERNAME", "unknown")),
        "details": details or {},
        "logger_version": "1.0.0",
    }
    entry_string = json.dumps(entry, sort_keys=True)
    prev = previous_hash or "0" * 64
    entry["hash"] = compute_hash(entry_string, prev)
    entry["previous_hash"] = prev
    return entry


def write_entry(entry, log_dir=None):
    """Append an audit entry to the JSONL log file."""
    log_dir = Path(log_dir) if log_dir else DEFAULT_LOG_DIR
    log_dir.mkdir(parents=True, exist_ok=True)
    log_path = log_dir / DEFAULT_LOG_FILE
    with open(log_path, "a") as f:
        f.write(json.dumps(entry, sort_keys=True) + "\n")
    return log_path


def verify_chain(log_path):
    """Verify the hash chain integrity of the entire log."""
    if not log_path.exists():
        return {"valid": False, "error": "Log file not found"}
    entries = 0
    errors = []
    expected_prev = "0" * 64
    with open(log_path) as f:
        for line_num, line in enumerate(f, 1):
            line = line.strip()
            if not line:
                continue
            entry = json.loads(line)
            entries += 1
            if entry.get("previous_hash") != expected_prev:
                errors.append(f"Line {line_num}: Chain broken")
            stored_hash = entry.pop("hash", None)
            stored_prev = entry.pop("previous_hash", None)
            computed = compute_hash(json.dumps(entry, sort_keys=True),
                                    stored_prev or "0" * 64)
            if computed != stored_hash:
                errors.append(f"Line {line_num}: Hash mismatch")
            entry["hash"] = stored_hash
            entry["previous_hash"] = stored_prev
            expected_prev = stored_hash
    return {"valid": len(errors) == 0, "entries_checked": entries, "errors": errors}


def rotate_logs(log_dir=None):
    """Rotate: compress the current log, then start fresh."""
    log_dir = Path(log_dir) if log_dir else DEFAULT_LOG_DIR
    log_path = log_dir / DEFAULT_LOG_FILE
    if not log_path.exists() or log_path.stat().st_size == 0:
        return None
    timestamp = datetime.utcnow().strftime("%Y%m%d_%H%M%S")
    archive_path = log_dir / "archive" / f"audit_{timestamp}.jsonl.gz"
    archive_path.parent.mkdir(parents=True, exist_ok=True)
    with open(log_path, "rb") as f_in:
        with gzip.open(archive_path, "wb") as f_out:
            shutil.copyfileobj(f_in, f_out)
    archive_hash = hashlib.sha256(archive_path.read_bytes()).hexdigest()
    manifest = log_dir / "archive" / "manifest.jsonl"
    with open(manifest, "a") as f:
        f.write(json.dumps({"timestamp": datetime.utcnow().isoformat() + "Z",
                            "archive": archive_path.name,
                            "sha256": archive_hash}) + "\n")
    log_path.write_text("")
    print(f"Rotated: {archive_path.name} (SHA-256: {archive_hash[:16]}...)")
    return archive_path

Let me break down the design decisions that matter.
JSONL format (one JSON object per line) is deliberate. Each log entry is a self-contained JSON document on its own line. This format is append-only (you never need to parse the entire file to add an entry), streamable (you can process entries one at a time without loading the full file into memory), and grep-friendly (you can search for specific events with standard text tools). For a six-year log archive, these properties matter enormously.
The hash chain is the tamper-evidence mechanism. Every entry includes a SHA-256 hash that incorporates both the entry's own content and the hash of the previous entry. If someone deletes, modifies, or reorders any entry, the chain breaks at that point and every subsequent entry's hash becomes invalid. The verify_chain function walks the entire log and checks every link. Run it daily (via cron) and you have continuous tamper detection.
The entry structure follows the HIPAA "who, what, when, where, why" pattern directly. The user_id field records who (never the patient's name — that would put PHI in the log). The action field records what they did. The timestamp records when, in UTC with ISO 8601 format (because timezones cause audit nightmares if you mix local time with UTC). The hostname records where. The event_type categorizes why. And the outcome records whether it succeeded or failed — which matters because failed access attempts are often the first sign of a breach.
One design decision worth explaining: the logger does not store PHI at any point. The resource field stores a reference like "Patient/12345" — the internal EHR identifier — not the patient's actual name, date of birth, or any other identifying information. This is deliberate. If your audit logs contain PHI, then the logs themselves become protected health information, subject to all the same encryption, access control, and breach notification requirements as clinical data. By keeping PHI out of the logs, you massively simplify compliance for the logging infrastructure itself.
The write_entry function uses file-append mode exclusively. It never reads, modifies, or deletes existing entries. This is an intentional architectural choice — an append-only log is inherently safer than one that supports in-place modification. The operating system's file locking handles concurrent writes (if multiple systems are logging to the same file), and the JSONL format ensures partial writes (from a crash or power loss) at most corrupt the last entry, not the entire file.
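One gap worth flagging: the listing above defines the core functions but not the argparse entry point that the usage string promises. Below is a minimal, self-contained sketch of that wiring. The flag names follow the usage string; everything else — the argument handling, and the inlining of the chain logic (in the real script these branches would call get_last_hash, create_audit_entry, write_entry, and verify_chain) — is an assumption, not the author's exact implementation:

```python
import argparse
import hashlib
import json
from pathlib import Path

GENESIS = "0" * 64  # starting value for the hash chain


def chain_hash(data_string, previous_hash):
    # Same construction as compute_hash in the main script.
    return hashlib.sha256(f"{previous_hash}:{data_string}".encode("utf-8")).hexdigest()


def main(argv=None):
    parser = argparse.ArgumentParser(description="HIPAA audit logger (CLI sketch)")
    parser.add_argument("--init", action="store_true")
    parser.add_argument("--log", metavar="EVENT_TYPE")
    parser.add_argument("--user")
    parser.add_argument("--action")
    parser.add_argument("--resource")
    parser.add_argument("--outcome", default="success")
    parser.add_argument("--verify", action="store_true")
    parser.add_argument("--log-dir", default="./hipaa_audit_logs")
    args = parser.parse_args(argv)

    log_dir = Path(args.log_dir)
    log_dir.mkdir(parents=True, exist_ok=True)
    log_path = log_dir / "audit.jsonl"

    if args.init:
        log_path.touch()  # create an empty log; the chain starts at GENESIS
        return 0

    if args.log:
        # Simplified entry (fewer fields than create_audit_entry) to keep
        # this sketch short; the chaining construction is the same.
        prev = GENESIS
        if log_path.exists() and log_path.stat().st_size:
            prev = json.loads(log_path.read_text().strip().splitlines()[-1])["hash"]
        entry = {"event_type": args.log, "user_id": args.user,
                 "action": args.action, "resource": args.resource,
                 "outcome": args.outcome}
        entry["hash"] = chain_hash(json.dumps(entry, sort_keys=True), prev)
        entry["previous_hash"] = prev
        with open(log_path, "a") as f:
            f.write(json.dumps(entry, sort_keys=True) + "\n")
        return 0

    if args.verify:
        # Walk the chain exactly as verify_chain does; exit nonzero on failure.
        expected_prev, errors, count = GENESIS, [], 0
        for num, line in enumerate(log_path.read_text().splitlines(), 1):
            if not line.strip():
                continue
            entry = json.loads(line)
            count += 1
            if entry.get("previous_hash") != expected_prev:
                errors.append(f"Line {num}: Chain broken")
            stored = entry.pop("hash", None)
            stored_prev = entry.pop("previous_hash", None)
            if chain_hash(json.dumps(entry, sort_keys=True),
                          stored_prev or GENESIS) != stored:
                errors.append(f"Line {num}: Hash mismatch")
            expected_prev = stored
        print(json.dumps({"valid": not errors, "entries_checked": count,
                          "errors": errors}))
        return 0 if not errors else 1
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```

The exit code on --verify (nonzero when the chain fails) matters for the cron setup later: it lets a wrapper script alert you instead of silently logging a failure.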
How the Tamper-Evident Hash Chain Works
This deserves its own explanation because it is the single most important feature of the logger, and understanding it will help you explain it to auditors.
Imagine you have three log entries. Entry one gets a hash computed from its content plus a starting value of all zeros. Entry two gets a hash computed from its content plus entry one's hash. Entry three gets a hash computed from its content plus entry two's hash. Each entry is mathematically linked to all previous entries.
Now imagine someone tries to delete entry two. Entry three's hash was computed using entry two's hash. Without entry two, there is no way to produce entry three's hash. The chain is broken, and the --verify command will immediately flag it.
What if someone tries to modify entry two instead of deleting it? The modified content produces a different hash. Entry three's stored previous_hash no longer matches entry two's new hash. Again, the chain breaks.
The only way to tamper with the log undetectably would be to modify an entry and then recompute every subsequent hash — which requires knowing the hash algorithm and having access to rewrite the entire log. This is why we keep archive copies (compressed and hashed separately) and why daily verification matters. If someone did manage to rewrite the active log, the mismatch between the active log and the most recent verified-good archive would be immediately apparent.
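The three-entry walkthrough above can be reproduced in a few lines of Python. This is an illustration, not part of the logger; build_chain and verify are hypothetical helpers that mirror the script's compute_hash construction:

```python
import hashlib
import json


def chain_hash(data, prev):
    # Same construction as compute_hash in the logger.
    return hashlib.sha256(f"{prev}:{data}".encode("utf-8")).hexdigest()


def build_chain(events):
    """Chain each entry to its predecessor, as the logger does."""
    chain, prev = [], "0" * 64
    for event in events:
        body = json.dumps(event, sort_keys=True)
        entry = dict(event, hash=chain_hash(body, prev), previous_hash=prev)
        chain.append(entry)
        prev = entry["hash"]
    return chain


def verify(chain):
    """Return the 1-based positions where the chain fails, if any."""
    bad, prev = [], "0" * 64
    for i, entry in enumerate(chain, 1):
        e = dict(entry)
        stored, stored_prev = e.pop("hash"), e.pop("previous_hash")
        if stored_prev != prev or chain_hash(json.dumps(e, sort_keys=True),
                                             stored_prev) != stored:
            bad.append(i)
        prev = stored
    return bad


entries = build_chain([{"user": "drjones", "action": f"event {i}"} for i in range(3)])
assert verify(entries) == []   # an intact chain verifies cleanly
del entries[1]                 # delete the middle entry...
assert verify(entries) == [2]  # ...and the break is detected at the gap
```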
For practices that want a belt-and-suspenders approach, the rotate_logs function computes a SHA-256 hash of the compressed archive and stores it in a manifest file. This gives you an independent integrity check for every archived log segment.
Setting Up Automated Log Rotation
Audit logs grow. A busy practice generating a few hundred events per day will accumulate roughly fifty megabytes of JSONL per year. That is manageable, but without rotation, a single log file becomes unwieldy to query and verify.
The --rotate command handles this. It compresses the current log into a gzipped archive (reducing size by roughly ninety percent), records the archive's SHA-256 hash in a manifest, and starts a fresh log file. The hash chain in the new file starts fresh — the archive's integrity is guaranteed by its own SHA-256 hash in the manifest rather than by the chain (which would require the entire historical chain to verify a single recent entry).
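As a sketch of how that manifest could be consumed, the following hypothetical verify_archives helper re-hashes each archived segment and compares it against the manifest entries that rotate_logs writes (the function name and return shape are assumptions):

```python
import hashlib
import json
from pathlib import Path


def verify_archives(archive_dir):
    """Re-hash every archived segment and compare against the manifest.

    Returns a dict mapping archive name -> True/False (True = hash matches).
    A missing archive counts as a failure.
    """
    archive_dir = Path(archive_dir)
    manifest = archive_dir / "manifest.jsonl"
    results = {}
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        archive = archive_dir / record["archive"]
        if not archive.exists():
            results[record["archive"]] = False
            continue
        actual = hashlib.sha256(archive.read_bytes()).hexdigest()
        results[record["archive"]] = (actual == record["sha256"])
    return results
```

Run something like this alongside the daily chain verification and any tampering with the compressed archives surfaces just as quickly as tampering with the active log.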
For practices with HIPAA-eligible cloud storage, copy the compressed archives to an encrypted S3 bucket or Azure Blob container with immutable storage policies enabled. This gives you offsite backup with regulatory-grade tamper protection — the cloud provider guarantees that archived objects cannot be modified or deleted during the retention period.
Scheduling with Cron: Set It and Forget It
The whole point of automated logging is that it runs without human intervention. Here is the cron configuration that keeps everything running:
0 2 * * 0 root python3 /opt/hipaa/hipaa_audit_logger.py --rotate --log-dir /var/log/hipaa
0 3 1 * * root python3 /opt/hipaa/hipaa_audit_logger.py --cleanup --log-dir /var/log/hipaa
0 4 * * * root python3 /opt/hipaa/hipaa_audit_logger.py --verify --log-dir /var/log/hipaa >> /var/log/hipaa/verification.log 2>&1
0 6 * * 1 root python3 /opt/hipaa/hipaa_audit_logger.py --report --log-dir /var/log/hipaa > /var/log/hipaa/weekly_report.json

Four cron jobs. That is it. Weekly rotation keeps log files manageable. Monthly cleanup removes archives older than six years (and only those). Daily verification catches tampering within twenty-four hours. Weekly reports give you a compliance summary ready for review.
On Windows servers (common in small practices), replace cron with Task Scheduler. The commands are identical — just wrap them in a batch file or PowerShell script and schedule them through the Task Scheduler GUI or schtasks command.
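For example, the daily verification job from the cron table might be registered like this (the schtasks flags are standard Windows syntax; the C:\hipaa paths are assumptions — adjust them to wherever you deployed the script):

```
schtasks /Create /TN "HIPAA Audit Verify" /SC DAILY /ST 04:00 ^
  /TR "python C:\hipaa\hipaa_audit_logger.py --verify --log-dir C:\hipaa\logs" ^
  /RU SYSTEM
```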
The MJS Log Analyzer: Spot Anomalies Before Auditors Do
Raw logs tell you what happened. Analysis tells you what matters. The analyzer script reads your audit logs and flags three categories of anomalies that auditors and security reviewers care about most.
// hipaa-log-analyzer.mjs
// Usage: node hipaa-log-analyzer.mjs ./hipaa_audit_logs/audit.jsonl
import { readFileSync } from "fs";

const THRESHOLDS = {
  failed_logins_per_hour: 5,
  after_hours_access: { start: 22, end: 6 },
  max_records_per_session: 50,
};

function analyzeLogs(logPath) {
  const lines = readFileSync(logPath, "utf-8")
    .trim()
    .split("\n")
    .filter(Boolean);
  const entries = lines.map((l) => JSON.parse(l));
  const report = {
    total_entries: entries.length,
    anomalies: [],
    by_type: {},
    by_user: {},
    by_outcome: {},
    failed_access: [],
    after_hours: [],
  };
  for (const entry of entries) {
    report.by_type[entry.event_type] =
      (report.by_type[entry.event_type] || 0) + 1;
    report.by_user[entry.user_id] = (report.by_user[entry.user_id] || 0) + 1;
    report.by_outcome[entry.outcome] =
      (report.by_outcome[entry.outcome] || 0) + 1;
    if (entry.outcome === "failure" && entry.event_type === "access") {
      report.failed_access.push({
        timestamp: entry.timestamp,
        user: entry.user_id,
      });
    }
    if (entry.timestamp) {
      // Timestamps are logged in UTC; convert to this machine's local time
      // so the after-hours window matches the practice's wall clock.
      const hour = new Date(entry.timestamp).getHours();
      const { start, end } = THRESHOLDS.after_hours_access;
      if (hour >= start || hour < end) {
        report.after_hours.push({
          timestamp: entry.timestamp,
          user: entry.user_id,
          action: entry.action,
        });
      }
    }
  }
  // Detect excessive failed access
  const failsByUser = {};
  for (const fail of report.failed_access) {
    failsByUser[fail.user] = (failsByUser[fail.user] || 0) + 1;
  }
  for (const [user, count] of Object.entries(failsByUser)) {
    if (count >= THRESHOLDS.failed_logins_per_hour) {
      report.anomalies.push({
        type: "excessive_failed_access",
        user,
        count,
        severity: "HIGH",
      });
    }
  }
  // Detect high-volume access
  for (const [user, count] of Object.entries(report.by_user)) {
    if (count > THRESHOLDS.max_records_per_session) {
      report.anomalies.push({
        type: "high_volume_access",
        user,
        count,
        severity: "MEDIUM",
      });
    }
  }
  return report;
}

const report = analyzeLogs(process.argv[2]);
console.log(JSON.stringify(report, null, 2));

Excessive failed access (severity HIGH): more than five failed access attempts from the same user. This is the most common indicator of a brute-force attack or a terminated employee trying to access systems after their account should have been disabled.
After-hours access (severity MEDIUM): any access between 10 PM and 6 AM. Legitimate after-hours access happens — providers check patient charts from home, on-call staff respond to emergencies. But unusual patterns of after-hours access, especially from non-clinical staff, deserve investigation.
High-volume access (severity MEDIUM): a user accessing more than fifty records in a single period. Healthcare workers occasionally need to access many records (for reporting, quality reviews, or audits). But a front desk receptionist accessing fifty patient charts in an hour is a red flag that warrants investigation.
Run the analyzer monthly, review the anomalies, and document your review. That documented review process — "we looked at the anomalies, investigated the flagged events, and either confirmed legitimate access or escalated for investigation" — is exactly what auditors want to see.
A word on thresholds: the defaults in the script (five failed attempts, fifty records, 10 PM to 6 AM) are starting points, not gospel. A practice with an on-call provider who regularly checks charts at midnight should adjust the after-hours window. A billing specialist who legitimately processes a hundred claims every morning should have a higher volume threshold. The key is that you have thresholds, you review the results, and you can justify the thresholds you chose. Auditors do not expect zero anomalies. They expect that you are watching for them and responding to them.
We have deployed this analyzer for practices across DeLand, Daytona Beach, and the broader Volusia County area. The most common finding in the first month is always the same: former employees whose access was never revoked. The second most common finding: shared credentials (two or more people logging in as the same user at different times). Both are compliance violations that would have gone undetected without the log analysis.
The administrative safeguards requirement at section 164.308(a)(1)(ii)(D) specifically requires "information system activity review" — regular review of audit records, access reports, and security incident tracking reports. Running this analyzer and documenting your review satisfies that requirement. Without it, you are relying on someone remembering to manually review log files, which in our experience happens approximately never in a busy practice.
Storing Logs for Six Years Without Breaking the Bank
Six years of audit logs sounds daunting until you do the math. A practice generating three hundred audit events per day produces roughly fifty megabytes of raw JSONL per year. Compressed with gzip, that drops to about five megabytes per year. Six years of archives: thirty megabytes.
Thirty megabytes. That is less storage than a single high-resolution photo from your phone.
Even without cloud storage, a USB drive from the office supply store holds years of compressed audit logs. But for proper disaster recovery and offsite storage — both of which HIPAA requires — here are the options in order of cost:
Local encrypted storage ($0/year): Keep archives on an encrypted partition of your existing server. Zero additional cost, but no offsite protection. Adequate for the logs themselves if you have a separate disaster recovery plan for the server.
AWS S3 Glacier Deep Archive (~$0.50/year for 30 MB): The cheapest cloud storage available. Data retrieval takes twelve to forty-eight hours, but you should never need to retrieve historical audit logs urgently — if you do, something has gone very wrong. Enable S3 Object Lock in Compliance mode for true WORM (Write Once Read Many) protection.
Azure Blob Archive Tier (~$0.60/year for 30 MB): Microsoft's equivalent, with similar pricing and immutability features. If your practice already uses Microsoft 365 (and most small practices in Volusia County do), Azure integration is straightforward.
The point is this: six-year log retention costs essentially nothing. The technical implementation — compression, archiving, retention management — is handled by the script above. There is no good reason not to do this.
One important caveat: some states have retention requirements longer than six years. Florida does not impose additional retention requirements beyond the federal HIPAA minimum for audit logs specifically, but check with your compliance officer if your practice operates in multiple states or serves patients covered by state-specific regulations. The script's RETENTION_YEARS constant can be adjusted to any value — just change the number and the cleanup routine handles the rest.
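The --cleanup routine referenced in the cron table is not shown in the main listing; here is a sketch of what it could look like, keyed off the audit_YYYYMMDD_HHMMSS timestamp that rotate_logs embeds in each archive name (cleanup_archives is a hypothetical name, and parsing the filename rather than trusting file mtimes is a design assumption):

```python
import re
from datetime import datetime, timedelta
from pathlib import Path

RETENTION_YEARS = 6
RETENTION_DAYS = RETENTION_YEARS * 365


def cleanup_archives(archive_dir, now=None):
    """Delete archives whose embedded timestamp is past the retention window.

    Returns the list of deleted file names. Files that do not match the
    audit_YYYYMMDD_HHMMSS naming pattern are left alone: never delete a
    file you cannot positively date.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=RETENTION_DAYS)
    deleted = []
    for archive in Path(archive_dir).glob("audit_*.jsonl.gz"):
        match = re.match(r"audit_(\d{8}_\d{6})\.jsonl\.gz$", archive.name)
        if not match:
            continue
        stamp = datetime.strptime(match.group(1), "%Y%m%d_%H%M%S")
        if stamp < cutoff:
            archive.unlink()
            deleted.append(archive.name)
    return deleted
```

Changing RETENTION_YEARS is the only adjustment needed for states or contracts with longer retention windows.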
For practices concerned about storage reliability over such long periods, consider the "3-2-1" backup rule: three copies of your archives, on two different storage types, with one copy offsite. In practice, this might look like: the original compressed archive on your server's encrypted drive, a copy on a network-attached storage device in a different room, and a third copy in S3 Glacier or Azure Blob Archive. For thirty megabytes of data, this entire strategy costs less per year than a single cup of coffee.
Testing Your Audit Logging (Before an Auditor Tests It for You)
Do not deploy the logger and assume it works. Test it. Here is a five-minute test sequence:
python3 hipaa_audit_logger.py --init
python3 hipaa_audit_logger.py --log access --user drjones --action "viewed patient chart" --resource "Patient/001"
python3 hipaa_audit_logger.py --log access --user frontdesk --action "checked in patient" --resource "Patient/002"
python3 hipaa_audit_logger.py --log security --user admin --action "changed password policy" --outcome success
python3 hipaa_audit_logger.py --log access --user unknown --action "login attempt" --outcome failure
python3 hipaa_audit_logger.py --verify
python3 hipaa_audit_logger.py --report
python3 hipaa_audit_logger.py --rotate

If the verify step reports a valid chain, your hash chain is working. If the rotation creates a compressed archive in the archive/ directory, your log management is working. If the report outputs a JSON summary with correct counts, your compliance reporting is working.
Run this test quarterly. Document that you tested it. That documentation — showing that you regularly verify your audit logging system works — is compliance gold.
Beyond the basic smoke test, there are two additional tests worth running. First, a tamper detection test: after logging a few entries, open the JSONL file in a text editor, change a single character in any entry, save it, and run --verify. You should see a "Hash mismatch" error for the tampered entry and a "Chain broken" error for every entry after it. This confirms your tamper detection works. Second, a rotation recovery test: run --rotate, then log a few new entries to the fresh log file, and run --verify again. The fresh log should verify cleanly because the chain starts fresh after rotation. This confirms that rotation does not break your logging.
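The tamper detection test can be scripted rather than done by hand in a text editor. The sketch below is self-contained (simplified entries; append_entry and chain_is_valid are hypothetical stand-ins that mirror the logger's create_audit_entry/write_entry and verify_chain):

```python
import hashlib
import json
import tempfile
from pathlib import Path

GENESIS = "0" * 64


def chain_hash(data, prev):
    # Same construction as compute_hash in the logger.
    return hashlib.sha256(f"{prev}:{data}".encode("utf-8")).hexdigest()


def append_entry(path, fields):
    """Append a chained entry, mirroring create_audit_entry + write_entry."""
    prev = GENESIS
    if path.exists():
        lines = [l for l in path.read_text().splitlines() if l.strip()]
        if lines:
            prev = json.loads(lines[-1])["hash"]
    entry = dict(fields)
    entry["hash"] = chain_hash(json.dumps(entry, sort_keys=True), prev)
    entry["previous_hash"] = prev
    with open(path, "a") as f:
        f.write(json.dumps(entry, sort_keys=True) + "\n")


def chain_is_valid(path):
    """File-based chain walk, mirroring verify_chain."""
    prev = GENESIS
    for line in path.read_text().splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        stored = entry.pop("hash", None)
        stored_prev = entry.pop("previous_hash", None)
        if stored_prev != prev:
            return False
        if chain_hash(json.dumps(entry, sort_keys=True), stored_prev) != stored:
            return False
        prev = stored
    return True


log = Path(tempfile.mkdtemp()) / "audit.jsonl"
for i in range(3):
    append_entry(log, {"user_id": "drjones", "action": f"viewed chart {i}"})
assert chain_is_valid(log)       # a clean log verifies

tampered = log.read_text().replace("chart 1", "chart 9", 1)
log.write_text(tampered)
assert not chain_is_valid(log)   # a one-character edit breaks the chain
```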
If either test fails, something is wrong with your deployment — fix it before the logger goes into production. The whole point of testing is to find problems when you can fix them cheaply, not during an OCR investigation when every gap costs five figures.
One more tip from experience: send yourself a calendar reminder to check the cron jobs quarterly. We have seen practices where cron jobs stopped running after a server update or a crontab file got overwritten during a patch cycle, and nobody noticed for months because the logging was supposed to be invisible. Check that the verification log is getting updated daily. Check that archives are appearing weekly. A logging system that silently stops logging is worse than having no logging at all, because you believe you are compliant when you are not.
For practices that need help implementing these tools, our security services include full HIPAA audit logging deployment, and we offer IT consulting in DeLand and across Volusia County.
Frequently Asked Questions
What exactly needs to be in a HIPAA audit log entry? Each entry should record the user who performed the action, what action was performed, when it occurred (with timestamp), what resource was accessed, whether the action succeeded or failed, and where the access originated. The HIPAA Security Rule at 45 C.F.R. section 164.312(b) requires mechanisms to "record and examine activity" but does not prescribe a specific format — the format shown in this script covers the standard audit fields that OCR investigators and compliance auditors expect.
How long do HIPAA audit logs need to be retained? The HIPAA Security Rule requires compliance documentation to be retained for six years from the date it was created or last in effect (45 C.F.R. section 164.530(j)). Most compliance experts and auditors interpret this to include audit logs. Some states have longer retention requirements — check your state's specific regulations. The script above defaults to six years and can be adjusted by changing the RETENTION_YEARS constant.
Can I use a cloud service for HIPAA audit log storage? Yes, as long as the cloud provider signs a BAA, the storage is encrypted at rest (AES-256), and you use immutable storage features (AWS S3 Object Lock, Azure Blob immutability policies) to prevent log tampering. AWS, Azure, and Google Cloud all offer HIPAA-eligible storage tiers. For six years of compressed audit logs from a small practice, the annual cost is typically under one dollar.
What happens if my audit log hash chain verification fails? A failed verification means either (a) the log file was corrupted (disk error, incomplete write), (b) someone tampered with the log, or (c) there was a software bug during logging. Investigate immediately. Compare the active log against your most recent archive to isolate which entries are affected. If tampering is suspected, this is a security incident that must be documented and potentially reported.
Do I need separate audit logs for every system that handles ePHI? Ideally, yes. Each system that stores or processes ePHI should generate its own audit logs. In practice, many small practices centralize logging by having each system send events to a single audit logging server. The script in this post can serve as that central logger — have your EHR, email system, file server, and other systems send events to it via the CLI interface or by writing directly to the JSONL file.
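As a sketch of that centralization, a sending system can shell out to the logger's documented CLI. The helper names and the script location below are assumptions — adjust them to your deployment:

```python
import subprocess
import sys


def build_audit_command(user_id, action, resource=None, event_type="access",
                        outcome="success", script="hipaa_audit_logger.py"):
    """Build the CLI invocation for one audit event.

    The flag names follow the logger's usage string; the script path is
    an assumption and should point at your deployed copy.
    """
    cmd = [sys.executable, script, "--log", event_type,
           "--user", user_id, "--action", action, "--outcome", outcome]
    if resource:
        cmd += ["--resource", resource]
    return cmd


def send_audit_event(**kwargs):
    """Run the logger CLI; raises CalledProcessError if logging fails."""
    return subprocess.run(build_audit_command(**kwargs), check=True)
```

A failed logging call should be treated as an error in the calling system, not silently swallowed — a system that cannot log its activity should not quietly keep operating on ePHI.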
Audit logging is the HIPAA requirement that catches the most practices off guard — not because it is hard, but because it is easy to postpone. The script above takes thirty minutes to deploy and runs unattended for years. That is thirty minutes between "we need to get to that eventually" and "we have it handled."
Copy the script. Run the test. Set up the cron jobs. And the next time an auditor asks about your audit logs, hand them the report.