Lighthouse incident timeline showing AI tool activity and prompt injection chain evidence

Private beta for AI coding assistant security

Lighthouse

Hosted security control plane for teams adopting AI coding assistants, MCP servers, and AI-built applications. Lighthouse helps AppSec see risky agent behavior, investigate prompt-injection chains, and prepare policy controls around the tools developers already use.

AI tool activity: Captured
MCP exposure: Scored
Incident chains: Rebuilt

What it protects

Security coverage for the new developer attack surface.

Coding assistants now read repos, call tools, inspect terminals, connect to MCP servers, and write production code. Lighthouse gives security teams a managed place to observe, govern, and investigate that activity during a controlled beta rollout.

Agent activity monitoring

See how Claude Code, Cursor, Windsurf, and other AI coding workflows use files, shell commands, network calls, repos, secrets, and local tools across enrolled developer devices.

MCP server inventory

Discover registered and shadow MCP servers, inspect exposed tools, and review risky access paths before they become unmanaged dependencies.

Prompt injection investigation

Connect hostile tool output, model behavior, and follow-on actions into an evidence chain your security team can investigate.

AI-built app review

Scan applications built with agents for common high-impact flaws such as IDOR, SSRF, unsafe execution, weak signing, and injection patterns.

How it works

From AI activity to security action.

Lighthouse is built for the workflow AppSec needs when coding agents become part of day-to-day engineering.

01

Enroll

Connect a small developer cohort and capture agent activity with signed ingestion.

02

Observe

See tool calls, MCP exposure, repos, devices, sessions, and risky behavior in one dashboard.

03

Investigate

Rebuild prompt-injection, insider-misuse, and AI-built app findings with evidence attached.

04

Act

Apply findings-as-code policy, route evidence to SIEM or tickets, and export reports.
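"Findings-as-code" policy in step 04 can be sketched as declarative rules matched against finding attributes. The rule schema, field names, and actions below are illustrative assumptions, not Lighthouse's actual policy format.

```python
# Hypothetical findings-as-code rules: each rule's match clause must be a
# subset of the finding's attributes for its action to fire.
RULES = [
    {"id": "pi-chain", "match": {"category": "prompt_injection"}, "action": "page_oncall"},
    {"id": "secrets", "match": {"category": "secret_access", "severity": "high"}, "action": "open_ticket"},
]

def evaluate(finding: dict) -> list[str]:
    """Return the actions of every rule whose match clause fits the finding."""
    return [
        rule["action"]
        for rule in RULES
        if all(finding.get(k) == v for k, v in rule["match"].items())
    ]

print(evaluate({"category": "prompt_injection", "severity": "medium"}))
# -> ['page_oncall']
```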

Lighthouse dashboard showing AI tool activity across developer devices

Analyst workflow

From noisy AI activity to evidence your security team can use.

Lighthouse separates insider misuse, prompt injection, and normal development activity into lanes that match early response workflows. Analysts can move from fleet view to one session timeline without rebuilding context by hand.

Prompt injection source and trigger are linked in one finding.
Tool activity keeps device, user, repo, and session context together.
Findings can be exported into reports, tickets, audit logs, and SIEM streams.
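One concrete export target mentioned later on this page is SARIF. As a rough sketch of what a SARIF-shaped export looks like, the envelope below follows the public SARIF 2.1.0 structure, while the rule ids and finding fields are illustrative assumptions.

```python
import json

def to_sarif(findings: list[dict]) -> str:
    """Wrap findings in a minimal SARIF 2.1.0 envelope."""
    results = [
        {
            "ruleId": f["rule"],
            "level": f.get("level", "warning"),
            "message": {"text": f["message"]},
        }
        for f in findings
    ]
    doc = {
        "version": "2.1.0",
        "runs": [{"tool": {"driver": {"name": "Lighthouse"}}, "results": results}],
    }
    return json.dumps(doc, indent=2)

print(to_sarif([{"rule": "ai-app/idor", "message": "IDOR in /orders/{id}"}]))
```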

Product proof

The important proof, without a long benchmark page.

Lighthouse is being shaped around the evidence security leaders ask for before onboarding: what it finds, how it is operated, and how the hosted beta is controlled.

Benchmark signal

In a blind rediscovery benchmark against published AI-app advisories, Lighthouse found 13 of 28 in-scope issues: 10/17 in langflow and 3/11 in open-webui.

Customer controls

HMAC-signed agent ingestion, offline queue health, findings-as-code policy, audit logs, SIEM and ticket routing, PDF reports, and SARIF export are implemented.
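HMAC-signed ingestion in general works by having the agent sign each event body with a shared secret so the server can reject tampered or forged submissions. This is a minimal sketch of the pattern, assuming a per-device shared secret and canonical JSON bodies; it is not Lighthouse's wire format.

```python
import hashlib
import hmac
import json

# Assumed per-device secret issued at enrollment (illustrative only).
SHARED_SECRET = b"per-device-enrollment-secret"

def _canonical(body: dict) -> bytes:
    """Serialize deterministically so agent and server sign identical bytes."""
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()

def sign_event(event: dict) -> dict:
    """Agent side: attach an HMAC-SHA256 signature over the event body."""
    sig = hmac.new(SHARED_SECRET, _canonical(event), hashlib.sha256).hexdigest()
    return {"body": event, "sig": sig}

def verify_event(envelope: dict) -> bool:
    """Server side: recompute the signature and compare in constant time."""
    expected = hmac.new(SHARED_SECRET, _canonical(envelope["body"]), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

envelope = sign_event({"device": "dev-42", "tool": "shell", "cmd": "git push"})
print(verify_event(envelope))
# -> True
```

Canonical serialization matters here: if the agent and server serialize the body differently (key order, whitespace), a valid signature fails to verify.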

Operational evidence

GitHub Actions OIDC image delivery, signed agent artifacts, migration checks, and smoke gates are already part of the beta process.

Lighthouse fleet dashboard showing endpoint agent status
Fleet status

Private beta

Bring one team, one AI coding workflow, and one onboarding window.