Hi there,
This week the agent and assistant world started to look like an actual industry instead of a research demo. OpenAI quantified what 700 million people are really doing with ChatGPT, an HKUDS personal tutor crossed 20k stars in days, the “Markdown vault as agent memory” pattern got a real implementation, and a fresh trends list shows voice AI and self-evolving agents pulling away from everything else. The week tied a lot of recent threads together - skills, harnesses, memory, personalization - and made the shape of the next twelve months noticeably clearer.
📃 In this Monday Morning Mashup:
⭐Highlight: OpenAI publishes the first real study of how 700M people actually use ChatGPT
🤖AI: DeepTutor turns “personal tutor” into a one-install open source product
🧠Memory: ByteRover ships the Markdown-vault agent memory Karpathy was just describing
🔧Tools: HKUDS’s OpenHarness collapses skills, memory, and governance into one CLI
Have a great week!
⭐Highlight: 73% of ChatGPT usage is personal, not work - the productivity narrative was always wrong
OpenAI just published the first comprehensive study of how 700 million people actually use ChatGPT, and the numbers reframe most of the public discussion. Only 27% of usage is work-related; 73% is personal, and the gap is widening month over month. The top three buckets are practical guidance (29%) - learning, how-to, tutoring - then seeking information (24%) as a Google replacement, then writing (24%) for emails, documents, and content. Coding and business automation, the things AI Twitter talks about most, are well down the list.
The implications matter for builders. The bulk of demand is for an everyday assistant that helps a normal person learn something, look something up, or rewrite something - not a coding agent with seventeen MCP servers attached. That maps neatly onto the rest of this week’s stories: a personal tutor crossing 20k stars, a Markdown-based memory system optimized for personal “second brains”, an agent harness whose flagship product is a chat companion in Slack and Telegram. The mass-market AI app is no longer hypothetical - it is a tutor, a writer, and a search engine, and the open source ecosystem is starting to ship credible versions of all three.

OpenAI’s first 700M user study
A breakdown of OpenAI’s published study on real ChatGPT usage: 73% personal versus 27% work, the top three categories (practical guidance, information seeking, writing), and what that means for the productivity narrative.
🤖AI: DeepTutor takes the “personalized AI tutor” idea to 20k stars
HKUDS/DeepTutor went from a Twitter thread to roughly 20k stars in a few days. It is described as “Agent-Native Personalized Tutoring” and is, in practice, a fully featured tutoring app you can run yourself: a Question Notebook with bookmarks and categories, a Visualize capability with Chart.js, SVG, and Mermaid diagrams, LaTeX block math parsing, an embedding-provider registry, RAG with knowledge bases, support for Qwen / vLLM / LM Studio / llama.cpp / o4-mini and others, plus the usual streaming chat, themes, and bookmarkable URL-based sessions. The release notes show daily versions through April with consistent, focused improvements - this is being run like a real product.
The fair criticism in the replies is that it is single-tenant: one installation per learner, not a SaaS-grade multi-user system. That is the right scope for what it is, though - a personal tutor on your own machine, not a Khan Academy replacement. Combined with the OpenAI usage data above, where “practical guidance / learning” is the single largest use case, DeepTutor is one of the clearest examples I have seen of open source filling a slot that the closed-source incumbents have been weirdly slow to address.
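To make the “LaTeX block math parsing” feature concrete: a tutoring frontend has to split a model answer into plain text and display-math segments so the math can be routed to a renderer like KaTeX. This is a minimal illustrative sketch of that technique, not DeepTutor’s actual code - the function name and the `$$ ... $$` delimiter convention are assumptions.

```python
import re

# Display-math blocks delimited by $$ ... $$ (DeepTutor also renders
# Mermaid and SVG; this sketch covers only block math).
BLOCK_MATH = re.compile(r"\$\$(.+?)\$\$", re.DOTALL)

def split_math_blocks(answer):
    """Split a model answer into ('text', ...) and ('math', ...) segments
    so a frontend can send the math parts to a LaTeX renderer."""
    segments, last = [], 0
    for m in BLOCK_MATH.finditer(answer):
        if m.start() > last:
            segments.append(("text", answer[last:m.start()]))
        segments.append(("math", m.group(1).strip()))
        last = m.end()
    if last < len(answer):
        segments.append(("text", answer[last:]))
    return segments
```

In a streaming chat the same split has to run incrementally, buffering until a closing `$$` arrives, but the segmentation logic is the same.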

An open source agent-native personalized tutoring app with notebooks, visualizations, RAG, broad model support and a release cadence to match.
🧠Memory: ByteRover ships the Markdown-vault memory pattern as a real CLI
If you have followed the agent-memory thread from MMM #43 onward, this week’s release closes a loop. ByteRover open sourced an agent memory system that operates as an Obsidian-compatible Markdown vault: tiered retrieval pulls only the relevant chunks of context instead of dumping whole files; nodes, links, and context graphs are maintained automatically; and the whole thing works against OpenClaw, Hermes, Claude Code, and similar harnesses without a vector database in sight. The team claims 50-70% token savings on average and benchmarks against Locomo and LongMemEval.
What is interesting beyond the numbers is the alignment with where the conversation has been going. Karpathy’s “bespoke dashboards” idea, the Sirchmunk argument from MMM #45, and now ByteRover all converge on the same conclusion: store agent memory in human-readable, file-system-native formats and let the agent do retrieval intelligently rather than precomputing embeddings. If you have an existing personal Obsidian vault, the offer here - “use this as your agent’s second brain, free, no infra” - is hard to argue with.
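The core idea - chunk the vault at heading boundaries, score chunks cheaply, and hand the agent only the winners - is simple enough to sketch. This is an illustrative toy, not ByteRover’s implementation: the function names are hypothetical, and a real system would layer smarter scoring tiers (links, recency, graph distance) on top of the lexical pass shown here.

```python
import re
from pathlib import Path

def chunk_vault(vault_dir):
    """Split every Markdown note in the vault into heading-level chunks."""
    chunks = []
    for path in Path(vault_dir).rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        # Split on ATX headings so each chunk is one section of one note.
        for section in re.split(r"(?m)^(?=#{1,6} )", text):
            if section.strip():
                chunks.append((path.name, section.strip()))
    return chunks

def retrieve(chunks, query, top_k=3):
    """First tier: cheap lexical overlap. Only the top-scoring chunks are
    ever placed in the agent's context - hence the token savings."""
    terms = set(query.lower().split())
    def score(chunk):
        words = set(re.findall(r"[a-z0-9]+", chunk[1].lower()))
        return len(terms & words)
    ranked = sorted(chunks, key=score, reverse=True)
    return [c for c in ranked[:top_k] if score(c) > 0]
```

The payoff of the file-system-native format is that this retrieval layer is optional: the same vault stays fully usable in Obsidian, grep, or git with no export step.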
ByteRover: Open-source memory for agents
Open source agent memory built on Markdown / Obsidian vaults with tiered retrieval, no vector DB, native team sync, plus benchmarks on Locomo and LongMemEval and a CLI install.
🔧Tools: HKUDS’s OpenHarness packages skills, memory, and governance into one CLI
The other HKUDS release this week is HKUDS/OpenHarness at around 10.4k stars, alongside its built-in personal agent ohmo. OpenHarness pitches itself as “core lightweight agent infrastructure”: a streaming tool-call loop with retry and parallel execution, 43 built-in tools (file, shell, search, web, MCP), on-demand skill loading from .md files compatible with Anthropic skills and plugins, CLAUDE.md discovery and injection, MEMORY.md persistent memory, multi-level permission modes with hooks and approval dialogs, plus subagent spawning and a team registry for swarm coordination. ohmo, built on top of it, works in Feishu, Slack, Telegram, and Discord, forks branches, runs tests, and opens PRs - all on top of your existing Claude Code or Codex subscription.
This is the most explicit “harness as platform” release I have seen so far. Where Aperant in MMM #46 wrapped Claude Code with kanban-style task management, OpenHarness goes one level deeper: it is the substrate that other CLI agents (OpenClaw, nanobot, Cursor) plug into. Pair it with last week’s claw-code, this week’s ByteRover memory, and MIRAGE’s pushback on benchmark theater, and the picture is consistent: the meaningful innovation right now is the layer between you and the model, not the model itself.
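“On-demand skill loading from .md files” is worth unpacking, because it is the mechanism that keeps these harnesses lightweight: the base prompt carries only a name-plus-description index, and a skill’s full Markdown body is read in only when the task actually calls for it. The sketch below is a hypothetical illustration of that pattern, not OpenHarness’s code - the `# name: description` first-line convention is an assumption.

```python
from pathlib import Path

def load_skill_index(skills_dir):
    """Index skills by name and one-line description. The full .md bodies
    stay on disk until a skill actually fires."""
    index = {}
    for path in Path(skills_dir).glob("*.md"):
        first = path.read_text(encoding="utf-8").splitlines()[0]
        # Assumed convention: first line is '# skill-name: description'
        name, _, desc = first.lstrip("# ").partition(":")
        index[name.strip()] = {"path": path, "desc": desc.strip()}
    return index

def inject_skills(index, task):
    """On-demand loading: append full skill bodies to the prompt only when
    the task mentions the skill name, keeping the base context small."""
    blocks = []
    for name, meta in index.items():
        if name.lower() in task.lower():
            blocks.append(meta["path"].read_text(encoding="utf-8"))
    return "\n\n".join(blocks)
```

A production harness would of course let the model, not a substring match, decide which skill to pull in - but the economics are the same: pay for a skill’s tokens only when it is used.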

An open agent harness with skills, memory, swarm coordination, governance hooks, and a built-in personal agent (ohmo) that works across Slack, Telegram, Discord and Feishu on top of an existing Claude Code or Codex subscription.
⚡Quick Hits
Fastest-growing GitHub repos this week - Sharbel’s weekly list, with the headlines being microsoft/VibeVoice (+11.1k, voice cloning and 60-min transcription in one pass), bytedance/deer-flow (+9.0k, Bytedance’s open SuperAgent), and NousResearch/hermes-agent (+8.8k, self-evolving memory). Theme of the week, in his words: voice AI and self-evolving agents.
x.com
Gemma 4 26B is pretty good with long context - A community report that the 26B Gemma 4 holds up well at long context, which together with Unsloth’s bug fixes from MMM #45 makes the 26B variant a real option for long-document workflows on a single big-VRAM card.
reddit.com
“Wow, the demand curve for local inference just…” - A LocalLLaMA thread on a sudden uptick in local-inference interest after several weeks of frontier-API price cuts. Useful as a sanity check on the “is local still worth it” question.
reddit.com
meituan-longcat/LongCat-AudioDiT - Meituan’s open audio diffusion transformer, aimed at high-quality long-form audio generation. Part of the broader voice-AI wave that is pulling clear of the rest of the open ecosystem this month.
github.com
k2-fsa/OmniVoice - From the k2-fsa team, an “omni” voice toolkit covering ASR, TTS, and related audio tasks in a single repo. A reasonable starting point if you want to roll your own voice stack instead of stitching three projects together.
github.com
Have a great week!