Hi there,
This week’s theme seems pretty clear: local AI is having a moment. Whether it’s running your own Manus-style agent, compressing vector databases to fit on a laptop, or sharing your screen without touching a server, there’s a growing appetite for tools that keep you in control.
The interesting thing is that these aren’t just proofs of concept anymore. These projects are hitting tens of thousands of stars and getting real production use. The privacy-first, local-first movement has graduated from ideology to infrastructure.
In this Monday Morning Mashup:
AgenticSeek: Your Own Local Manus
LEANN: Shrink Your Vector Database by 97%
libSQL: SQLite Without the Contribution Lock
Quick Hits: P2P screen sharing, memory APIs, and more
AgenticSeek: Your Own Local Manus
Remember Manus AI? The autonomous agent that can browse the web, write code, and plan tasks for you? Well, someone built a fully local alternative that doesn’t require API keys or a $200-a-month subscription. It’s called agenticSeek, and it’s exploded to over 24,000 stars on GitHub.

The premise is simple but powerful: an AI agent that thinks, browses the web, writes and executes code, all while keeping your data on your device. It works with local LLM providers like Ollama and LM Studio, so you can run it on whatever hardware you have. The project uses SearxNG for web search, has a clean web interface, and even supports voice interaction.
What’s particularly clever is the agent routing system. You describe what you want, and it automatically selects the best specialized agent for the task, whether that’s web browsing, coding, file management, or complex multi-step planning.
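To make the local angle concrete, here’s a minimal sketch of the pattern: a toy keyword router that picks a specialist system prompt and sends the request to a local Ollama instance over its standard REST API. The routing heuristic and model name are illustrative, not agenticSeek’s actual implementation.

```python
# Toy sketch: route a task to a "specialist" and run it on a local
# Ollama model. The keyword router and model name are illustrative;
# agenticSeek uses its own routing system.
import requests

AGENTS = {
    "code": "You are a coding agent. Write and explain code.",
    "browse": "You are a web research agent. Summarize what you find.",
    "default": "You are a general planning assistant.",
}

def route(task: str) -> str:
    task = task.lower()
    if any(w in task for w in ("code", "script", "bug")):
        return "code"
    if any(w in task for w in ("search", "find", "browse")):
        return "browse"
    return "default"

def run(task: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",  # Ollama's default endpoint
        json={
            "model": "qwen2.5:14b",  # any model you've pulled locally
            "messages": [
                {"role": "system", "content": AGENTS[route(task)]},
                {"role": "user", "content": task},
            ],
            "stream": False,
        },
        timeout=120,
    )
    return resp.json()["message"]["content"]

print(run("Write a script that renames my photos by date"))
```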
The catch? You’ll need decent hardware. The team recommends a GPU with at least 12GB of VRAM for simple tasks, and 24GB+ for anything involving web browsing and planning. But if you’re already running local models, you’ve probably got that covered.
agenticSeek - Fully Local Manus AI
An autonomous agent that thinks, browses the web, writes code, and plans tasks, all running locally on your hardware.
LEANN: Shrink Your Vector Database by 97%
If you’ve ever tried to build a RAG system on local hardware, you’ve probably hit the storage wall. Traditional vector databases store every single embedding, which gets expensive fast. Index 60 million text chunks and you’re looking at 201GB of storage.

LEANN flips this on its head. Instead of storing all embeddings upfront, it stores a pruned graph structure and computes embeddings on-demand only when you search. The result? That 60 million chunk index drops to just 6GB. That’s a 97% reduction.
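Those numbers pass a back-of-envelope check. Assuming 768-dimensional float32 embeddings (the dimension is my assumption; it varies by model), the raw vectors alone account for most of that storage:

```python
chunks = 60_000_000
dim = 768            # assumed embedding dimension; varies by model
bytes_per_value = 4  # float32

raw_gb = chunks * dim * bytes_per_value / 1e9
print(f"{raw_gb:.0f} GB")  # ~184 GB of raw vectors, before graph and metadata
```

Add graph links and metadata on top of that and you land near the 201GB figure, which is why throwing the stored embeddings away is such a big win.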
The project comes from Berkeley’s Sky Computing Lab and already has nearly 10,000 stars. What makes it practical is the ecosystem they’ve built around it: you can RAG your emails, browser history, WeChat messages, ChatGPT conversations, and even your entire codebase. There’s MCP integration for Claude Code, CLI tools, and Python APIs.
The compression comes from “high-degree preserving pruning,” which keeps the important hub nodes in your search graph while removing redundant connections. During search, it only computes embeddings for nodes along the search path, not the entire index.
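In sketch form, the search side of that idea looks something like this. It’s an illustration of the concept, not LEANN’s actual API, and a real implementation batches and caches the embedding calls:

```python
# Illustrative best-first search over a pruned graph, embedding nodes
# on demand instead of storing every vector up front.
import heapq

def search(graph, texts, embed, query, entry, k=10):
    """graph: node -> list of neighbor nodes (the pruned structure)
    texts: node -> raw text chunk; embed: text -> vector.
    Embeddings are computed only for nodes the search touches."""
    q = embed(query)

    def dist(node):
        v = embed(texts[node])  # recomputed here; real systems cache this
        return sum((a - b) ** 2 for a, b in zip(q, v))

    visited = {entry}
    frontier = [(dist(entry), entry)]
    hits = []
    while frontier and len(hits) < k:
        d, node = heapq.heappop(frontier)
        hits.append((node, d))
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                heapq.heappush(frontier, (dist(nbr), nbr))
    return hits
```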
LEANN - The smallest vector index in the world
97% storage savings while running fast, accurate, 100% private RAG on your personal device.
libSQL: SQLite Without the Contribution Lock
SQLite is everywhere, embedded in basically every device with a processor. But here’s the thing: it famously doesn’t accept outside contributions. Your brilliant optimization or feature idea? You can fork it, but it’s not getting merged upstream.
libSQL, created by Turso, is trying to change that. It’s an open-source, open-contribution fork that maintains file format compatibility while adding features the community actually wants. With over 16,000 stars, it’s become a serious project.
The additions are practical: embedded replicas (replicate your database inside your app), a server mode for remote access like PostgreSQL, WebAssembly user-defined functions, and native support for Rust, JavaScript, Python, and Go. The ALTER TABLE extension for modifying column types is something SQLite purists have wanted for years.
Most importantly, it promises compatibility: if you don’t use the new features, your database files remain standard SQLite format, readable by any SQLite tool.
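Here’s what the embedded-replica pattern looks like in Python, assuming Turso’s experimental bindings; the package name, connect() signature, and URL are assumptions, so check the libSQL docs for the current API.

```python
# Hedged sketch: an embedded replica that syncs with a remote primary.
import libsql_experimental as libsql

conn = libsql.connect(
    "local.db",                                  # ordinary file on disk
    sync_url="libsql://my-db.example.turso.io",  # hypothetical remote primary
    auth_token="...",
)
conn.sync()  # pull the latest changes from the remote

# Reads hit the local file at SQLite speed; skip the libSQL-only
# features and the file stays readable by any standard SQLite tool.
print(conn.execute("SELECT count(*) FROM notes").fetchone())
```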
libSQL - Open Source, Open Contribution SQLite
A fork of SQLite that welcomes community contributions while maintaining file format compatibility.
Quick Hits
Bananas - Peer-to-peer screen sharing without accounts or servers. It uses WebRTC for the actual P2P connection (still needs STUN/TURN servers for connection negotiation), but your screen data flows directly between devices. Works on Mac, Windows, and Linux. Sometimes the simplest tools are the best ones.
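For context, the STUN piece is a single line of WebRTC configuration. Here’s what it looks like with the Python aiortc library; Bananas itself isn’t written in Python, so this is purely illustrative:

```python
# Illustrative only: the WebRTC setup pattern Bananas relies on.
from aiortc import RTCConfiguration, RTCIceServer, RTCPeerConnection

# STUN only helps each peer discover its public address during
# negotiation; after that, screen data flows directly peer-to-peer.
config = RTCConfiguration(
    iceServers=[RTCIceServer(urls="stun:stun.l.google.com:19302")]
)
pc = RTCPeerConnection(configuration=config)
```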

supermemory - Calling itself “the Memory API for the AI era,” this is a fast, scalable memory engine for AI agents. Add memories from URLs, PDFs, or plain text; connect it to Notion and Google Drive; and use it through browser extensions or Raycast. Over 14,000 stars and growing. The MCP integration means Claude and Cursor can directly access your memory store.
WaterCrawl - A web crawler specifically designed to transform web content into LLM-ready data. Built with Python (Django, Scrapy, and Celery), with integrations for Dify and n8n if you’re building automation workflows. Useful if you’re building your own AI training pipelines or knowledge bases.
OpenContext - A personal context store for AI agents. The interesting angle here is it works with your existing coding agent CLI (Codex, Claude Code, OpenCode) rather than replacing them. Adds a GUI plus built-in skills so agents can read, search, create, and iterate on your knowledge without changing your workflow.
Have a great week!