An open-source voice assistant for mobile with real-time API integration. Think "Ollama for mobile + realtime voice."
Connects to your Google Drive, DeepWiki, Hacker News, Daily Hugging Face Top Papers, and the web.
It's currently a thin wrapper around the OpenAI Realtime speech API, but the long-term vision is to make it extensible and pluggable, with a fully open-source and privacy-first stack. To keep all data within your cloud perimeter, an Azure OpenAI Private Endpoint can be configured as an alternative deployment option.
If this sounds interesting, ⭐️ the project on GitHub to help it grow.
hackernews-arty-demo.mp4
What's in the demo
- "What are the top stories on Hacker News?"
- "What are the comments about the Montana law story?"
- "Summarize the new Montana law"
Arty_Demo.mp4
or view the full resolution version
Voice chat (home screen) | Text chat | Configure connectors
Voice AI is now incredibly powerful when connected to your data, yet current solutions are closed source, compromise your privacy, and are headed toward ads and lock-in.
This project offers a fully open alternative: local execution, no data monetization, and complete control over where your data goes.
Security note: TestFlight builds are compiled binaries; do not assume they exactly match this source code. If you require verifiability, build from source and review the code before installing.
Getting Started Instructions
- Create a new OpenAI API key and grant it the minimum realtime permissions shown below (Models: read, Model capabilities: write).
- Grant the key access to the Responses API.
- Paste the key into the onboarding wizard and tap Next.
- Connect Google Drive so Arty can see your files. OAuth tokens stay on-device. See Security + Privacy for details.
- Choose the Google account you want to use.
- Tap “Hide Advanced” and then “Go to vibemachine (unsafe).”
- Review the OAuth scopes that Arty is requesting.
- Confirm the connection. You should see a success screen when Drive is linked.
- Optional: Provide your own Google Drive Client ID for extra control (see the OAuth sketch after this list).
- Finish the onboarding wizard.
- Start chatting with Arty.
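For reference, here is a minimal sketch of what the Drive OAuth request can look like with expo-auth-session. The client ID, hook name, and scope are illustrative and not necessarily how Arty wires this up internally:

```ts
import * as WebBrowser from "expo-web-browser";
import * as Google from "expo-auth-session/providers/google";

WebBrowser.maybeCompleteAuthSession();

// Hypothetical hook: call promptAsync() from a button to start the consent flow.
export function useDriveAuth() {
  const [request, response, promptAsync] = Google.useAuthRequest({
    iosClientId: "YOUR_IOS_CLIENT_ID.apps.googleusercontent.com", // placeholder
    scopes: ["https://www.googleapis.com/auth/drive.readonly"],
  });

  // On success, response.authentication?.accessToken is the short-lived Drive token,
  // which should go straight into secure storage rather than app state.
  return { request, response, promptAsync };
}
```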
How to get the most out of it
- Personalize Arty: adjust the system prompt, voice, VAD mode, and tool configuration from the Advanced settings sheets to match your workflow.
- Try text chat mode when you can't use voice: enable it under Settings. Note: there's no streaming token support yet, so it feels pretty slow.
- Explore the connectors: Enable DeepWiki for documentation search, Hacker News for tech news, and Daily Hugging Face Top Papers for the latest AI research.
- Connectors - Google Drive, DeepWiki, Hacker News, Daily Hugging Face Top Papers, and web search: summarize files in Google Drive, search documentation with DeepWiki, browse Hacker News, discover the latest AI research papers, and search the web
- Extensible - Adding connectors is fairly easy. File an issue to request the connector you'd want to see.
- Customizable prompts - Edit system and tool prompts directly from the UI
- Multi-mode audio - Works with speaker, handset, or Bluetooth headphones
- Background noise handling - Mute yourself in loud environments
- Session recording - Optional conversation recording and sharing
- Voice and text modes - Switch between input methods seamlessly
- Observability - Optional Logfire integration for debugging (disabled by default)
- Privacy-focused - Working toward a fully private solution with local execution options
- Cost - OpenAI API costs can add up with extended usage due to context window management
- Text mode is limited - Text mode does not support streaming tokens yet and has a very basic, limited UX (see the streaming sketch after this list).
- Platform - iOS only; no Android support yet because the app currently uses a native Swift WebRTC implementation, even though the UI is React Native via Expo.
- UX - No progress indicators during operations
- Recording - The optional call recording implementation is not very reliable, since it regenerates the conversation from a text transcript
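On the text-mode limitation above: streaming could be added with the OpenAI Responses API. A minimal sketch, assuming the official openai npm client and an illustrative model name (this is not the app's current code):

```ts
import OpenAI from "openai";

// Hypothetical helper: stream a text reply token-by-token instead of waiting
// for the full response. Model name and function name are placeholders.
export async function streamReply(apiKey: string, prompt: string): Promise<string> {
  const client = new OpenAI({ apiKey });
  const stream = await client.responses.create({
    model: "gpt-4o-mini",
    input: prompt,
    stream: true,
  });

  let text = "";
  for await (const event of stream) {
    if (event.type === "response.output_text.delta") {
      text += event.delta; // append each token as it arrives, e.g. into chat UI state
    }
  }
  return text;
}
```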
Privacy status: We're actively working toward a fully private, end-to-end local solution. Currently, the app uses OpenAI's API, which means user prompts and connector content are transmitted to OpenAI by design. Your credentials (API keys, OAuth tokens) never leave your device and are stored securely in iOS Keychain. Future updates will add support for self-hosted and fully local execution options.
To keep all data within your cloud perimeter, Azure OpenAI Service with Private Link can be configured to ensure traffic remains within your virtual network infrastructure.
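For illustration, pointing the text path at an Azure OpenAI resource with the openai npm client might look like the sketch below; the endpoint, API version, and deployment name are placeholders for your own resource, and the realtime voice path requires separate wiring:

```ts
import { AzureOpenAI } from "openai";

// Sketch only: reach your own Azure OpenAI resource (behind a Private Endpoint)
// instead of api.openai.com. All names here are assumptions.
export async function pingAzure(apiKey: string) {
  const client = new AzureOpenAI({
    endpoint: "https://my-resource.openai.azure.com", // your private endpoint
    apiKey,
    apiVersion: "2024-10-21",
  });
  const completion = await client.chat.completions.create({
    model: "my-gpt4o-mini-deployment", // the name of your Azure deployment
    messages: [{ role: "user", content: "ping" }],
  });
  return completion.choices[0]?.message?.content;
}
```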
From a security perspective, the main risks are credential leakage or abuse:
- OpenAI API Key
- Google Drive Auth Token
Mitigation: All credentials remain on-device, stored only in memory or secure storage (iOS Keychain). Audit the source code to verify that no credentials are transmitted externally.
Security + privacy: storage, scopes, and network flow recap
- All token storage in memory and secure storage happens in lib/secure-storage.ts.
- The actual saving/retrieval of tokens is delegated to the Expo library expo-secure-store.
- Transport security: all outbound requests to OpenAI, Google, and optional Logfire use HTTPS with TLS handled by each provider. This project does not introduce custom proxies or MITM layers.
- OAuth tokens and API keys are stored via expo-secure-store, which maps to the iOS Keychain using the kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly accessibility level. Tokens are never written to plaintext disk (a minimal storage sketch follows this list).
- Recording is off by default, and conversation transcripts are not saved. Optional recordings remain on-device and rely on standard iOS filesystem encryption.
- No third-party endpoints beyond OpenAI, Google, and optional Logfire are contacted at runtime. The app does not embed analytics, crash reporting SDKs, or ad networks.
- The Google Drive OAuth scope used by the default Client ID in the TestFlight build is read-only for existing Drive content: the app can create or edit files it created itself, but cannot edit or delete files that originated elsewhere. For tighter control, register your own Google Drive app, supply its Client ID, and grant the permissions you deem appropriate.
- Assume that connector operations which retrieve file contents may send that content to the LLM for summarization unless you have deliberately disabled that behavior.
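A minimal sketch of the storage pattern described above, using expo-secure-store with the device-only accessibility level; the key and function names are illustrative, and the real implementation lives in lib/secure-storage.ts:

```ts
import * as SecureStore from "expo-secure-store";

// Illustrative key name; the real keys are defined in lib/secure-storage.ts.
const OPENAI_API_KEY = "openai_api_key";

export async function saveApiKey(value: string): Promise<void> {
  await SecureStore.setItemAsync(OPENAI_API_KEY, value, {
    // Keychain item is readable only after first unlock and never syncs off-device.
    keychainAccessible: SecureStore.AFTER_FIRST_UNLOCK_THIS_DEVICE_ONLY,
  });
}

export async function getApiKey(): Promise<string | null> {
  return SecureStore.getItemAsync(OPENAI_API_KEY);
}

export async function clearApiKey(): Promise<void> {
  await SecureStore.deleteItemAsync(OPENAI_API_KEY);
}
```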
Observability logs are disabled by default. They should be automatically scrubbed of API tokens by Logfire itself. Only enable Logfire after you have audited the code and feel comfortable; it is mainly a developer feature and not recommended for casual usage or testing.
Out of scope: This project does not currently defend against (1) on-device compromise, (2) malicious LLM responses executing actions against connected services using delegated tokens, or (3) interception of API traffic by the model provider.
Installation steps
```bash
git clone https://github.com/vibemachine-labs/arty.git
cd arty
curl -fsSL https://bun.sh/install | bash
bun install
```
When building from source, you will need to provide your own Google Drive Client ID. You can decide the permissions you want to give it, as well as whether you want to go through the verification process.
For testing, the following OAuth scopes are suggested (a sketch of a read-only Drive request follows this list):
- See and download your Google Drive files (included by default)
- See, edit, create, and delete only the specific Google Drive files you use with this app
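With the read-only scope above, a connector call against the Drive REST API might look like the following sketch; the query parameters and helper name are assumptions rather than the app's exact request:

```ts
// Hypothetical helper: list files visible to the granted (read-only) scope.
export async function listDriveFiles(accessToken: string) {
  const params = new URLSearchParams({
    pageSize: "20",
    fields: "files(id,name,mimeType,modifiedTime)",
  });
  const res = await fetch(`https://www.googleapis.com/drive/v3/files?${params}`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) {
    throw new Error(`Drive list failed: ${res.status}`);
  }
  const body = (await res.json()) as {
    files: Array<{ id: string; name: string; mimeType: string; modifiedTime: string }>;
  };
  return body.files;
}
```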
To run in the iOS simulator:
```bash
bunx expo run:ios
```
To run on a physical device:
```bash
bunx expo run:ios --device
```
Editing Swift code in Xcode
To open the project in Xcode:
```bash
xed ios
```
In Xcode, the native Swift code will be under Pods / Development Pods.
Misc Dev Notes
For certain testing scenarios, disable the onboarding wizard by editing app/index.tsx and commenting out the useEffect block that evaluates onboarding status:
```tsx
useEffect(() => {
  let isActive = true;

  const evaluateOnboardingStatus = async () => {
    try {
      const storedKey = await getApiKey();
      const hasStoredKey = typeof storedKey === "string" && storedKey.trim().length > 0;
      if (!isActive) {
        return;
      }
      setOnboardingVisible(!hasStoredKey);
    } catch (error) {
      if (!isActive) {
        return;
      }
      log.warn("Unable to determine onboarding status from secure storage", error);
      setOnboardingVisible(true);
    }
  };

  if (!apiKeyConfigVisible) {
    void evaluateOnboardingStatus();
  }

  return () => {
    isActive = false;
  };
}, [apiKeyConfigVisible, onboardingCheckToken]);
```
- Project bootstrapped with `bunx create-expo-app@latest .`
- Refresh dependencies after pulling new changes: `bunx expo install`
- Install new dependencies: `bunx expo install <package-name>`
- Allow LAN access once: `bunx expo start --lan`
- Register device: `eas device:create`
- Scan the generated QR code on the device and install the provisioning profile via Settings.
- Configure build: `bunx eas build:configure`
- Build: `eas build --platform ios --profile dev_self_contained` (a sketch of a matching eas.json profile follows this list)
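The dev_self_contained profile referenced above lives in eas.json; the project's actual file is not shown here, but a minimal profile of that shape could look like:

```json
{
  "build": {
    "dev_self_contained": {
      "developmentClient": true,
      "distribution": "internal",
      "ios": { "simulator": false }
    }
  }
}
```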
If pods misbehave, rebuild from scratch:
```bash
bunx expo prebuild --clean --platform ios
bunx expo run:ios
```
Architecture overview
React Native WebRTC libraries did not reliably support speakerphone mode during prototyping. The native Swift implementation resolves this issue but adds complexity and delays Android support.
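Regardless of which WebRTC layer carries the audio, a client typically mints a short-lived Realtime session token first. Below is a sketch of that standard pattern, assuming direct use of the OpenAI sessions endpoint (not necessarily how Arty handles it):

```ts
// Hypothetical helper: mint an ephemeral Realtime client secret that the
// native WebRTC layer can use for its handshake. Model and voice are placeholders.
export async function createRealtimeSession(apiKey: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/realtime/sessions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-realtime-preview",
      voice: "alloy",
    }),
  });
  if (!res.ok) {
    throw new Error(`Realtime session create failed: ${res.status}`);
  }
  const session = await res.json();
  return session.client_secret.value; // short-lived token, safe to hand to the WebRTC layer
}
```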
All connectors use statically defined tools with explicit function definitions, providing reliability and predictable behavior. Examples include Google Drive file operations, DeepWiki documentation search, Hacker News browsing, and Daily Hugging Face Top Papers discovery.
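As an illustration of a statically defined tool, the sketch below pairs a function definition in the shape the Realtime API expects with a local handler for the Hacker News connector; the tool name, schema, and handler are illustrative rather than the app's exact definitions:

```ts
// One statically defined tool: explicit name, description, and JSON-schema parameters.
export const hackerNewsTopStoriesTool = {
  type: "function" as const,
  name: "hacker_news_top_stories",
  description: "Return the current top Hacker News stories with titles and URLs.",
  parameters: {
    type: "object",
    properties: {
      limit: { type: "number", description: "How many stories to return (default 10)" },
    },
    required: [],
  },
};

// Local handler invoked when the model calls the tool.
export async function runHackerNewsTopStories(limit = 10) {
  const ids: number[] = await (
    await fetch("https://hacker-news.firebaseio.com/v0/topstories.json")
  ).json();
  const items = await Promise.all(
    ids.slice(0, limit).map(async (id) =>
      (await fetch(`https://hacker-news.firebaseio.com/v0/item/${id}.json`)).json(),
    ),
  );
  return items.map((item: { title: string; url?: string }) => ({
    title: item.title,
    url: item.url,
  }));
}
```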
MCP support is not yet implemented, since all tools are currently local. Future versions will add MCP server support via cloud or local tunnel connections.
GPT-4 web search serves as a temporary solution. The roadmap includes integrating a dedicated search API (e.g., Brave Search) using user-provided API tokens.
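As a roadmap illustration only, a Brave Search-backed web connector using a user-provided subscription token might look like this sketch (endpoint and fields follow Brave's public API; not part of the current app):

```ts
// Hypothetical connector: query Brave Search with the user's own API token.
export async function braveWebSearch(token: string, query: string) {
  const url = new URL("https://api.search.brave.com/res/v1/web/search");
  url.searchParams.set("q", query);
  url.searchParams.set("count", "5");

  const res = await fetch(url, {
    headers: {
      Accept: "application/json",
      "X-Subscription-Token": token,
    },
  });
  if (!res.ok) {
    throw new Error(`Brave Search failed: ${res.status}`);
  }
  const data = await res.json();
  return (data.web?.results ?? []).map((r: { title: string; url: string }) => ({
    title: r.title,
    url: r.url,
  }));
}
```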
OpenAI is currently the only supported backend. Adding support for multiple providers and self-hosted backends is on the roadmap.
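One possible direction for multi-provider support is a thin backend interface that each provider implements; this is a hypothetical shape, not anything that exists in the codebase today:

```ts
// Hypothetical abstraction for pluggable voice backends (OpenAI, Azure, self-hosted).
export interface VoiceBackend {
  /** Stable identifier, e.g. "openai-realtime" or "self-hosted". */
  id: string;
  /** Open a streaming voice session against the provider. */
  connect(options: { apiKey?: string; baseUrl?: string }): Promise<VoiceSession>;
}

export interface VoiceSession {
  sendAudioChunk(pcm: ArrayBuffer): void;
  onTranscript(callback: (text: string) => void): void;
  close(): Promise<void>;
}
```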
- Address limitations listed above
- Improve text mode support
- Investigate async voice processing to reduce cost
- Add support for alternative voice providers (Unmute.sh, Speaches.ai, self-hosted)
- Remote MCP integration
- TypeScript MCP plugin support
The app itself will remain completely open source, with no restrictions or limitations.
Business model TBD. Likely a managed backend service using either:
- Azure OpenAI realtime APIs
- Fully open-source stack — possibly Unmute.sh or Speaches.ai
- Spread the word - Star github.com/vibemachine-labs/arty, share with friends
- Try it - Run the app and file issues
- Give feedback - Fill out a quick questionnaire (10 questions, 2 mins) or schedule a 15-min user interview
- Contribute ideas - File issues with appropriate labels
- Create pull requests - For larger proposed changes, it's probably better to file an issue first
- Email/Twitter: Email or Twitter/X via my GitHub profile.
- Issues, Ideas: Submit bugs, feature requests, or connector suggestions on GitHub Issues.
- Discord: A server will be launched if there’s enough interest.
- Responsible disclosure: Report security-relevant issues privately via email using the address listed on my GitHub profile before any public disclosure.



