It's Wednesday, March 18th: This week, builders can run AI agents directly on their local machines with Manus Desktop, lock down agentic workflows with NVIDIA's new security stack for OpenClaw, and parallelize complex coding tasks by spawning subagents inside OpenAI Codex.

Head over to our Events Portal to get the latest on upcoming AI Collective events near you. Search by city, date, or event format, and join thousands of builders at events across 100+ chapters on every continent (except Antarctica, for now).
🌁 Based in SF? Check out SF IRL, MLOps SF, GenerativeAISF, or Cerebral Valley’s spreadsheet for more!

In Today’s Top Tools, we spotlight some of the most innovative, creative AI apps that we recommend adding to your stack.
1️⃣ Manus Desktop Lets Your AI Agent Take the Wheel on Your Actual Computer

Most AI tools live in a browser tab. You paste something in, get something back, and then do the real work yourself. Manus just changed that. Their new “My Computer” feature turns the Manus agent into something that operates directly on your local machine: running terminal commands, reading and editing your files, launching applications, and using whatever dev tools you already have installed. It’s the difference between an assistant that talks about work and one that actually does it.
What it can do
The scope here is broad, and intentionally so.
Execute CLI commands in your terminal (Python, Node, Swift, Xcode, anything you have installed)
Read, analyze, and edit local files without uploading them anywhere
Launch and control desktop applications
Run ML model training or LLM inference on your own GPU
Real use cases, not demos
Manus shared a few examples that suggest where this is heading.
A florist used it to scan thousands of unsorted photos, identify contents, and sort them into categorized folders automatically
A colleague had the agent build a real-time meeting translation app in Swift in twenty minutes, without manually opening Xcode
Batch-rename hundreds of invoices to standardized formats in minutes instead of hours
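To make the invoice example concrete, here is a minimal sketch of the kind of local script a Manus-style agent might generate and run for you. The target pattern (`invoice_<vendor>_<YYYY-MM-DD>.pdf`) and the filename conventions are illustrative assumptions, not Manus's actual output:

```python
from pathlib import Path
import re

def standardized_name(original: str) -> str:
    """Map a messy invoice filename to 'invoice_<vendor>_<YYYY-MM-DD>.pdf'.

    Returns the name unchanged if no date can be found.
    """
    stem = Path(original).stem
    # Look for a date in YYYY-MM-DD or YYYYMMDD form anywhere in the name.
    m = re.search(r"(\d{4})-?(\d{2})-?(\d{2})", stem)
    if not m:
        return original
    date = "-".join(m.groups())
    # Treat the leading alphabetic run as the vendor name.
    vendor_match = re.match(r"[A-Za-z]+", stem)
    vendor = (vendor_match.group(0) if vendor_match else "unknown").lower()
    return f"invoice_{vendor}_{date}.pdf"

def rename_invoices(folder: Path, dry_run: bool = True) -> list[tuple[str, str]]:
    """Plan (and optionally apply) renames for every PDF in `folder`."""
    plan = []
    for pdf in sorted(folder.glob("*.pdf")):
        new_name = standardized_name(pdf.name)
        if new_name != pdf.name:
            plan.append((pdf.name, new_name))
            if not dry_run:
                pdf.rename(pdf.with_name(new_name))
    return plan
```

The point of the Manus feature is that you describe the rename rule in plain English and the agent writes, shows, and (with your approval) executes something like this against your real files.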
How it handles trust
This is the part that matters most for anyone giving an AI agent terminal access.
Every command requires explicit user approval before execution
You can choose “Always Allow” for trusted tasks or “Allow Once” for individual review
Cloud integrations (Gmail, Google Calendar) can bridge with local resources, so you can pull files from a home machine while working remotely
2️⃣ NVIDIA NemoClaw Adds Security Guardrails to Your Autonomous Agents

If you’ve been building with OpenClaw, you’ve probably felt the gap: the agents work, but there’s no real infrastructure for controlling what they can access, what data they touch, or how they behave when things go sideways. NVIDIA just announced NemoClaw at GTC 2026 to fill that gap. It’s a security and privacy stack that sits underneath your agents and gives you policy-based controls, sandboxed execution, and a privacy router for cloud model access. One command installs the full stack, including Nemotron models and the OpenShell runtime.
What’s in the stack
NemoClaw bundles several layers into a single install.
NVIDIA Nemotron models for on-device inference
OpenShell runtime for isolated agent execution
Policy-based security guardrails that define what agents can and can’t do
A privacy router that controls how and when agents reach cloud models
Network-level privacy controls enforced at the infrastructure layer
Where it runs
The hardware support is wide, which matters for teams with mixed setups.
GeForce RTX PCs and laptops
RTX PRO workstations
DGX Station and DGX Spark AI supercomputers
Cloud and on-premises environments
Why this matters for builders
Jensen Huang called OpenClaw “the operating system for personal AI,” and it’s been one of the fastest-growing open source projects in recent memory. But production agents need more than orchestration. They need permission boundaries, audit trails, and failure isolation. NemoClaw is the first serious attempt at making that a single-command install rather than something every team has to build from scratch.
3️⃣ OpenAI Codex Subagents Let You Parallelize Complex Coding Workflows

Anyone who’s used Codex for larger tasks knows the bottleneck: one agent, one thread, working through problems sequentially. Subagents change that. Codex can now spawn specialized agents that work in parallel and report back with consolidated results. You tell Codex what you need, it breaks the work into parallel tracks, assigns each to a purpose-built agent, and stitches the outputs together. It’s particularly useful for codebase exploration, multi-step feature implementation, and any workflow where waiting for one step to finish before starting the next is a waste of time.
Built-in agent types
Three roles ship out of the box.
default: general-purpose fallback for anything that doesn’t fit a specific pattern
worker: optimized for execution and implementation tasks
explorer: read-heavy, built for codebase exploration and analysis
Custom agents
You can define your own using TOML config files.
Store them at ~/.codex/agents/ (personal) or .codex/agents/ (project-scoped)
Override model selection, reasoning effort, sandbox mode, and MCP server configurations
Default concurrency: 6 open threads, with configurable caps
Max nesting depth defaults to 1 to prevent runaway recursion
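A custom agent definition might look like the hypothetical TOML below. The specific key names (`model`, `reasoning_effort`, `sandbox_mode`, the `[mcp_servers]` table) are illustrative guesses; the announcement confirms only that those four things are overridable, not the exact schema:

```toml
# Hypothetical ~/.codex/agents/reviewer.toml (personal scope)
# Key names are illustrative, not the confirmed Codex schema.
name = "reviewer"
description = "Read-only agent for code review passes"
model = "gpt-5-codex"          # override the default model
reasoning_effort = "high"      # spend more thinking time per task
sandbox_mode = "read-only"     # never write to the workspace

[mcp_servers.github]
command = "github-mcp-server"  # example MCP server binding
```

Project-scoped agents would live under .codex/agents/ in the repo instead, so a team can version-control them alongside the code they operate on.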
Batch processing (experimental)
There’s also an early spawn_agents_on_csv tool for processing multiple rows with dedicated workers. Output gets combined and exported to CSV with job tracking metadata. It’s experimental, but the pattern points toward Codex becoming a proper orchestration layer, not just a code completion tool.

In this section, we feature a few standout opportunities from companies building at the edge of AI. Each role is selected for impact, growth potential, and relevance to our community.
Founding Engineer, Convexia, San Francisco / Remote ($90K–$200K, 0.50%–1.50%): “You’ll build the agentic frameworks powering AI-driven drug asset evaluation for pharma teams, from orchestration and tool calling to failure recovery, at a YC S25 company replacing months of manual diligence with automated in silico analysis.”
Senior ML / AI Engineer, Confido, New York ($200K–$250K + 40% Bonus, 0.15%–0.40%): “You’ll own the full ML lifecycle at a 4x-YoY-growth fintech building AI agents that automate thousands of hours of retail revenue accounting for brands like Olipop and Baskin Robbins, fresh off a $15M Series A.”
AI/ML Engineer, DeepAware AI, San Francisco / Remote ($130K–$170K): “You’ll design reinforcement learning models for GPU workload scheduling and anomaly detection at a YC S25 startup making data centers more autonomous, backed by second-time founders with Siemens and Sumitomo experience.”
Founding Software Engineer, Simple AI, San Francisco ($100K–$250K, 0.50%–2.50%): “You’ll push the limits of voice AI as the second hire at a company doubling revenue month over month, backed by angels including the co-founder of Twitch and the CEO of Scale AI.”
📝 Community Notes
MiniMax x AI Collective: A room worth being in this GTC Weekend

GTC brings everyone together in the SF Bay Area, but the real value is in the rooms you choose. This Saturday, MiniMax is teaming up with AI Collective to bring together a curated group of founders and builders for a high-signal gathering.
MiniMax is one of the few AI labs building full-stack systems across foundation models, multimodal generation, and agent systems powering real-world production. If you haven’t been following them closely, now is a good time to start paying attention.
Onstage:
A live model launch (first look, in the room)
Cofounder Yeyi Yun + open AMA
A room full of people actually building
We reserved a limited number of spots for AI Collective members, completely free. GTC doesn’t happen every week — don’t miss your chance to join the conversation.
Become An AI Expert In Just 5 Minutes
If you’re a decision maker at your company, you need to be on the bleeding edge of, well, everything. But before you go signing up for seminars, conferences, lunch ‘n learns, and all that jazz, just know there’s a far better (and simpler) way: Subscribing to The Deep View.
This daily newsletter condenses everything you need to know about the latest and greatest AI developments into a 5-minute read. Squeeze it into your morning coffee break and before you know it, you’ll be an expert too.
Subscribe right here. It’s totally free, wildly informative, and trusted by 600,000+ readers at Google, Meta, Microsoft, and beyond.
🌁 HumanX 2026 — April 6-9

HumanX 2026 (April 6–9) brings a concentrated slice of the AI ecosystem into one building in San Francisco. The speaker and attendee list spans Fei-Fei Li, Andrew Ng, Ray Kurzweil, founders from Databricks, Replit, Pika, Cohere, ElevenLabs, Cerebras, and CEOs from AWS, Snowflake, Zoom, along with partners from a16z, Greylock, Kleiner Perkins, General Catalyst, and hundreds more.
Last year, founders walked away with Series A rounds and enterprise partnerships that started as hallway conversations or demo-booth follow-ups. This year, The AI Collective will be on-site running 18+ programs and hosting a major exhibit on the floor, giving our community a clear home base inside the conference. With roughly 70% of attendees at VP-level and above, the value is less about volume and more about the density of decision-makers across industry, startups, and capital.
If you’re actively building or leading in applied AI, this is one of the rare weeks where your users, partners, and future investors are literally in the same building.
Our Premier Partner: Roam

Roam is the virtual workspace our team relies on to stay connected across time zones. It makes collaboration feel natural with shared spaces, private rooms, and built-in AI tools.
Roam’s focus on human-centered collaboration is why they’re our Premier Partner, supporting our mission to connect the builders and leaders shaping the future of AI.
➡️ Before You Go
Partner With Us
Launching a new product or hosting an event? Put your work in front of our global audience of builders, founders, and operators — we feature select products and announcements that offer real value to our readers.
👉 To be featured or sponsor a placement, reach out to our team.
The AI Collective is a community of volunteers, made for volunteers. All proceeds directly fund future initiatives that benefit this community.
Stay Connected
🧑💼 LinkedIn: The AI Collective
𝕏 Twitter / X: @AICollectiveCo
Get Involved
About Joy Dong
Joy is a news editor, writer, and entrepreneur at the forefront of the emerging tech landscape. A former educator turned media strategist, she demystifies complex systems to make AI and blockchain accessible for all. Joy is on a mission to explore how decentralized technology and artificial intelligence can be leveraged to build a more innovative and transparent future.
About Noah Frank
Noah is a researcher, innovation strategist, and ex-founder thinking and writing about the future of AI. His work and body of research focus on aligning governance strategies to anticipate transformative change before it happens.


