
Hi, this is Clicky.

It's an AI teacher that lives as a buddy next to your cursor. It can see your screen, talk to you, and even point at stuff. Kinda like having a real teacher next to you.

Download it here for free.

Here's the original tweet that kinda blew up, with a demo and more context.

Clicky — an ai buddy that lives on your mac

This is the open-source version of Clicky for those that want to hack on it, build their own features, or just see how it works under the hood.

Get started with Claude Code

The fastest way to get this running is with Claude Code.

Once you have Claude Code running, paste this:

Hi Claude.

Clone https://github.com/farzaa/clicky.git into my current directory.

Then read the CLAUDE.md. I want to get Clicky running locally on my Mac.

Help me set up everything — the Cloudflare Worker with my own API keys, the proxy URLs, and getting it building in Xcode. Walk me through it.

That's it. It'll clone the repo, read the docs, and walk you through the whole setup. Once you're running you can just keep talking to it — build features, fix bugs, whatever. Go crazy.

Manual setup

If you want to do it yourself, here's the deal.

Prerequisites

  • macOS 14.2+ (for ScreenCaptureKit)
  • Xcode 15+
  • Node.js 18+ (for the Cloudflare Worker)
  • A Cloudflare account (free tier works)
  • An OpenAI API key

1. Set up the Cloudflare Worker

The Worker is a tiny proxy that holds your API keys. The app talks to the Worker, the Worker talks to the APIs. This way your keys never ship in the app binary.
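
For a sense of the client side of that arrangement, here's a minimal sketch of how an app might stream from such a proxy. The names and request body shape are hypothetical; the real client lives in OpenAIAPI.swift. Only the /responses route name comes from the actual Worker.

import Foundation

// Sketch of the proxy pattern from the app's side. The request goes to
// the Worker, which attaches the real OpenAI key server-side and relays
// the upstream SSE stream back. No key ever ships on the device.
func streamResponse(prompt: String, workerBaseURL: URL) async throws {
    var request = URLRequest(url: workerBaseURL.appendingPathComponent("responses"))
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(["input": prompt])

    let (bytes, _) = try await URLSession.shared.bytes(for: request)
    for try await line in bytes.lines where line.hasPrefix("data: ") {
        print(line.dropFirst(6)) // raw SSE event payloads
    }
}

First, install the Worker's dependencies: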

cd worker
npm install

Now add your secrets. Wrangler will prompt you to paste each one:

npx wrangler secret put OPENAI_API_KEY

The Worker also exposes non-secret defaults for speech, set under [vars] in worker/wrangler.toml:

[vars]
OPENAI_TTS_MODEL = "gpt-4o-mini-tts"
OPENAI_TTS_VOICE = "cedar"

Deploy it:

npx wrangler deploy

It'll give you a URL like https://your-worker-name.your-subdomain.workers.dev. Copy that.

2. Run the Worker locally (for development)

If you want to test changes to the Worker without deploying:

cd worker
npx wrangler dev

This starts a local server (usually http://localhost:8787) that behaves exactly like the deployed Worker. You'll need to create a .dev.vars file in the worker/ directory with your keys:

OPENAI_API_KEY=sk-...

Then set WorkerBaseURL in leanring-buddy/Info.plist to http://localhost:8787 while developing locally.

3. Update the proxy URLs in the app

Set WorkerBaseURL in leanring-buddy/Info.plist to your deployed Worker URL, for example:

<key>WorkerBaseURL</key>
<string>https://your-worker-name.your-subdomain.workers.dev</string>
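
At runtime the app reads this key from its bundle. A minimal sketch of that lookup (the exact code in the Swift sources may differ):

import Foundation

// Pull the Worker URL out of Info.plist at launch.
guard let urlString = Bundle.main.object(forInfoDictionaryKey: "WorkerBaseURL") as? String,
      let workerBaseURL = URL(string: urlString) else {
    fatalError("WorkerBaseURL missing or malformed in Info.plist")
}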

4. Open in Xcode and run

open leanring-buddy.xcodeproj

In Xcode:

  1. Select the leanring-buddy scheme (yes, the typo is intentional, long story)
  2. Set your signing team under Signing & Capabilities
  3. Hit Cmd + R to build and run

The app will appear in your menu bar (not the dock). Click the icon to open the panel, grant the permissions it asks for, and you're good.

Permissions the app needs

  • Microphone — for push-to-talk voice capture
  • Accessibility — for the global keyboard shortcut (Control + Option)
  • Screen Recording — for taking screenshots when you use the hotkey
  • Screen Content — for ScreenCaptureKit access
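
If you're curious how those checks look in code, here's a rough preflight sketch; the app's actual prompting flow may differ.

import AVFoundation
import ApplicationServices

// Preflight the permissions above before prompting the user.
let micStatus  = AVCaptureDevice.authorizationStatus(for: .audio)  // Microphone
let axTrusted  = AXIsProcessTrusted()                              // Accessibility
let canCapture = CGPreflightScreenCaptureAccess()                  // Screen Recording

if !canCapture {
    // Shows the system prompt; on denial the user has to flip the
    // toggle in System Settings > Privacy & Security.
    CGRequestScreenCaptureAccess()
}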

Architecture

If you want the full technical breakdown, read CLAUDE.md. But here's the short version:

Menu bar app (no dock icon) with two NSPanel windows — one for the control panel dropdown, one for the full-screen transparent cursor overlay. Push-to-talk streams audio to OpenAI Realtime transcription using a worker-minted ephemeral session, sends the transcript + screenshot to the OpenAI Responses API via streaming SSE, and plays the response through the OpenAI speech API. The model can embed [POINT:x,y:label:screenN] tags in its responses to make the cursor fly to specific UI elements across multiple monitors.
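
To make that pointing bit concrete, here's a minimal sketch of parsing those tags out of streamed text. It assumes labels never contain ':' or ']'; the real parser in the app may handle more.

import Foundation

// One pointer target parsed out of a model response.
struct PointTag {
    let x: Int
    let y: Int
    let label: String
    let screen: Int
}

// Extracts [POINT:x,y:label:screenN] tags from response text.
func parsePointTags(in text: String) -> [PointTag] {
    let pattern = #/\[POINT:(\d+),(\d+):([^:\]]+):screen(\d+)\]/#
    return text.matches(of: pattern).compactMap { match in
        guard let x = Int(match.1), let y = Int(match.2),
              let screen = Int(match.4) else { return nil }
        return PointTag(x: x, y: y, label: String(match.3), screen: screen)
    }
}

// "Click Save [POINT:420,310:Save button:screen1] when you're done."
// parses to PointTag(x: 420, y: 310, label: "Save button", screen: 1).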

Project structure

leanring-buddy/          # Swift source (yes, the typo stays)
  CompanionManager.swift    # Central state machine
  CompanionPanelView.swift  # Menu bar panel UI
  OpenAIAPI.swift           # OpenAI Responses streaming client
  OpenAITTSClient.swift     # OpenAI speech playback
  OverlayWindow.swift       # Blue cursor overlay
  OpenAIAudioTranscriptionProvider.swift # Real-time transcription
  BuddyDictation*.swift     # Push-to-talk pipeline
worker/                  # Cloudflare Worker proxy
  src/index.ts              # Three routes: /responses, /speech, /transcription-session
CLAUDE.md                # Full architecture doc (agents read this)

Contributing

PRs welcome. If you're using Claude Code, it already knows the codebase — just tell it what you want to build and point it at CLAUDE.md.

Got feedback? DM me on X @farzatv.
