Jeriko – an AI agent that runs directly inside your OS
1 point | 1 hour ago | 1 comment | jeriko.ai
Khaleel7337
1 hour ago
Hi HN,

Over the last few months we’ve been experimenting with a different direction for AI assistants.

Most AI tools today live in the browser. You ask questions, the model responds with text, and then you carry out the action yourself.

We wondered: what happens if the AI lives inside the operating system instead?

So we built Jeriko, an AI operator that runs directly on your machine.

Instead of a chatbot, the AI has access to the environment where work actually happens — your filesystem, terminal, browser, APIs, and automation hooks.

Installation is one command:

curl -fsSL https://jeriko.ai/install.sh | bash

After installation, Jeriko runs as a local daemon on macOS or Linux.

It exposes the system as tools that an LLM can use.
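To make "exposes the system as tools" concrete, here is a minimal sketch of how a daemon might wrap a shell command as an LLM-callable tool. The `run_shell` name and the tool-schema shape are illustrative assumptions, not Jeriko's actual API.

```python
# Hypothetical sketch: wrap shell execution as a structured tool an LLM can call.
import subprocess

def run_shell(command: str, timeout: int = 30) -> dict:
    """Run a shell command and return a structured result for the model."""
    proc = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return {"exit_code": proc.returncode, "stdout": proc.stdout, "stderr": proc.stderr}

# A JSON-style tool description the model can be given (illustrative shape):
SHELL_TOOL = {
    "name": "run_shell",
    "description": "Execute a shell command on the host",
    "parameters": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}

print(run_shell("echo hello")["stdout"].strip())
```

The model never touches the OS directly; it only emits structured calls against descriptions like `SHELL_TOOL`, and the daemon executes them.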

What it can do

Jeriko can:

• execute shell commands
• read and write files
• watch directories
• trigger workflows
• respond to webhooks
• interact with APIs
• automate development tasks

The idea is that the computer itself becomes the agent environment.

Example workflows

Remote machine control

Jeriko can connect to messaging platforms like Telegram. You can send instructions from your phone:

“deploy the staging branch”
“check server logs”
“find that PDF I downloaded last week”

The commands execute directly on your machine.
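A toy sketch of the message-to-action routing involved, under stated assumptions: the dispatch table and handler below are hypothetical stand-ins (in practice the LLM would pick the tool call, not a lookup table).

```python
# Illustrative only: routing an incoming chat message to a local action.
def handle_message(text: str) -> str:
    """Map an instruction to a host command (toy rule-based router;
    a real agent would let the LLM choose the tool call)."""
    routes = {
        "check server logs": "tail -n 5 /var/log/syslog",
        "deploy the staging branch": "git push origin staging",
    }
    cmd = routes.get(text.lower())
    if cmd is None:
        return "no matching action"
    # A real daemon would execute cmd on the host; here we just report it.
    return f"would run: {cmd}"

print(handle_message("check server logs"))
```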

Continuous automation

Jeriko supports triggers such as:

• cron schedules
• filesystem watchers
• webhooks
• email listeners

Example:

Every morning it summarizes GitHub issues.

Or, when Stripe fires a webhook, it updates internal dashboards.
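A trigger system like the one described can be sketched as an event registry that maps trigger sources to handlers. The class and event names here are hypothetical, not Jeriko's real design.

```python
# Minimal sketch of a trigger registry: handlers subscribe to named events
# (cron fires, webhooks, file changes) and run when the event arrives.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TriggerRegistry:
    _handlers: dict = field(default_factory=dict)

    def on(self, event: str, handler: Callable[[dict], str]) -> None:
        """Subscribe a handler to an event name."""
        self._handlers.setdefault(event, []).append(handler)

    def fire(self, event: str, payload: dict) -> list:
        """Run all handlers for an event and collect their results."""
        return [h(payload) for h in self._handlers.get(event, [])]

registry = TriggerRegistry()
registry.on("stripe.webhook", lambda p: f"update dashboard for {p['event']}")
print(registry.fire("stripe.webhook", {"event": "invoice.paid"}))
```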

Autonomous project generation

Jeriko can also scaffold and deploy projects directly on the host machine.

Some quick examples it generated and deployed:

https://public-rosy-five.vercel.app

https://greenfield-ventures.vercel.app

https://london-dental-clinic-seven.vercel.app

https://makeup-pricing-review.vercel.app

https://quicksite-pied.vercel.app

Model support

Jeriko is model-agnostic.

It currently works with:

• GPT
• Claude
• local models via Ollama
• other providers

Models can be swapped via configuration.
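"Swapped via configuration" might look roughly like the dispatch below — a provider registry keyed by a config value. The registry keys and stub providers are illustrative assumptions, not Jeriko's actual config format.

```python
# Sketch of model-agnostic dispatch: the provider is chosen purely by config.
from typing import Callable

# Stub providers standing in for real API clients (hypothetical names).
PROVIDERS: dict = {
    "openai": lambda prompt: f"[gpt] {prompt}",
    "anthropic": lambda prompt: f"[claude] {prompt}",
    "ollama": lambda prompt: f"[local] {prompt}",
}

def complete(prompt: str, config: dict) -> str:
    """Route a prompt to whichever provider the config names."""
    provider: Callable[[str], str] = PROVIDERS[config["provider"]]
    return provider(prompt)

print(complete("hello", {"provider": "ollama"}))
```

Swapping models then means changing one config key, with no changes to the agent loop.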

Architecture

Jeriko runs as a background daemon that exposes OS capabilities as tools:

• filesystem
• shell
• network requests
• browser automation
• connectors (GitHub, Stripe, Gmail, etc.)

The LLM orchestrates these tools to execute tasks.

Persistent memory allows the agent to accumulate knowledge about workflows and projects.
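The orchestration described above can be sketched as a loop: the model proposes a tool call, the daemon executes it, and the result is appended to the history until the model returns a final answer. `fake_llm` and the `read_file` tool are stand-ins, not Jeriko's real interfaces.

```python
# Sketch of an LLM tool-orchestration loop with a stubbed model.
def fake_llm(history: list) -> dict:
    """Stand-in for a model call: first requests a tool, then finishes."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "read_file", "args": {"path": "notes.txt"}}
    return {"final": "summarized the file"}

# Hypothetical tool table the daemon exposes to the model.
TOOLS = {"read_file": lambda args: f"contents of {args['path']}"}

def run_agent(task: str) -> str:
    history = [{"role": "user", "content": task}]
    while True:
        step = fake_llm(history)
        if "final" in step:
            return step["final"]
        result = TOOLS[step["tool"]](step["args"])
        history.append({"role": "tool", "content": result})

print(run_agent("summarize my notes"))
```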

Design goals

• Local-first architecture

• Model-agnostic runtime

• OS-level automation instead of chat interfaces

• Long-running agents via triggers

We’re curious what the HN community thinks.

Does running AI agents directly inside the OS make sense as a direction?

Or do you think agents should stay in sandboxed environments like E2B, or in containerized runtimes?

Project: https://jeriko.ai

Khaleel7337
1 hour ago
A few people asked about safety / permissions.

Jeriko doesn't run arbitrary commands across the entire system by default.

We added a few safeguards:

1. Workspace sandbox

All code execution happens inside a dedicated folder called `workspace`. The agent operates within that directory when generating code, running scripts, or modifying files. This prevents it from touching unrelated parts of the system unless explicitly allowed.
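One common way to enforce that kind of confinement is to resolve every requested path and reject anything that lands outside the workspace root. Whether Jeriko implements it exactly this way is an assumption; the sketch below shows the general pattern.

```python
# Sketch of workspace confinement: resolve paths (following ".." and symlinks)
# and allow only those inside the workspace root.
from pathlib import Path

WORKSPACE = Path("/home/user/workspace")  # hypothetical root

def is_inside_workspace(path: str) -> bool:
    """True only if the resolved path stays under the workspace root."""
    root = WORKSPACE.resolve()
    resolved = (WORKSPACE / path).resolve()
    return resolved == root or root in resolved.parents

print(is_inside_workspace("project/app.py"))    # True
print(is_inside_workspace("../../etc/passwd"))  # False
```

Resolving before checking matters: a naive string-prefix check would let `../../etc/passwd` escape.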

2. Snapshots

Jeriko can create snapshots of the workspace before executing larger operations. If something goes wrong, the environment can be reverted to the previous snapshot.
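A snapshot/revert cycle can be sketched with plain directory copies; this is an assumed, simplified mechanism — a real implementation would likely use something more efficient than full copies.

```python
# Sketch: snapshot the workspace before a risky operation, revert on failure.
import shutil
import tempfile
from pathlib import Path

def snapshot(workspace: Path) -> Path:
    """Copy the workspace aside and return the snapshot location."""
    snap = Path(tempfile.mkdtemp(prefix="snap-")) / "state"
    shutil.copytree(workspace, snap)
    return snap

def revert(workspace: Path, snap: Path) -> None:
    """Discard the current state and restore the snapshot."""
    shutil.rmtree(workspace)
    shutil.copytree(snap, workspace)

# Demo: snapshot, "corrupt" a file, then revert.
ws = Path(tempfile.mkdtemp(prefix="ws-"))
(ws / "app.py").write_text("print('ok')\n")
snap = snapshot(ws)
(ws / "app.py").write_text("broken")
revert(ws, snap)
print((ws / "app.py").read_text())
```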

3. Controlled execution

Commands are not blindly executed. The system evaluates tool calls and applies validation before running actions that modify files or run processes.
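One form such validation can take is matching each proposed command against a denylist of destructive patterns before execution. The patterns below are illustrative, not Jeriko's actual rule set.

```python
# Sketch of pre-execution validation via a denylist of destructive patterns.
import re

DENY_PATTERNS = [
    r"\brm\s+-rf\s+/",  # wiping from the filesystem root
    r"\bmkfs\b",        # formatting a disk
    r":\(\)\s*{",       # classic fork bomb
]

def validate(command: str) -> bool:
    """Return True only if the command matches no destructive pattern."""
    return not any(re.search(p, command) for p in DENY_PATTERNS)

print(validate("ls -la"))    # True
print(validate("rm -rf /"))  # False
```

A denylist alone is easy to bypass, which is why it would sit alongside the workspace sandbox and snapshots rather than replace them.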

The goal is to let the AI operate on the machine while still keeping a controlled environment for experimentation.

We're still exploring the right balance between capability and safety, so feedback from people here is very helpful.
