Forum Ventures' AI Studio has launched 17 companies since 2023. We don't wait for founders to bring us ideas. Instead, we go looking for the problems worth solving, find the founders who've lived them, and build together from day one.
Right now, we're obsessed with one pattern: the infrastructure layer that should exist in critical industries but doesn't. Teams are rebuilding the same foundational code on every deployment. Floors are going down because operators lack the right signal. Engineers are doing manually what software solved a decade ago everywhere else.
We wrote about where this is happening in Edge AI here. Today we're going deeper across four areas.
This is our Call for Founders. We're looking for the people who've been inside these problems long enough to know how to make them disappear. If you've worked in one of these spaces and have a point of view, we want to hear it. Even if that point of view is that we're wrong.
-
Robotics teams are rebuilding the same sensor integration code on every deployment. Factory floors are going down because operators can't tell which alert matters. Industrial engineers are still uploading PLC code by hand. Edge AI teams are running inference on a dozen incompatible chipsets with no shared tooling, no observability, and no abstraction layer between them.
Different industries. Different problems. Same pattern. The infrastructure layer that should exist doesn't. And every team that needs it is building it themselves, badly, from scratch.
We're looking for founders who can see exactly where that layer is. Who have lived inside one of these problems and know what it would take to make it disappear.
Here are four areas where we think this is happening right now. If you think we're wrong about any of these, or if you see the opportunity differently, we want to hear from you. Show us why our thesis is incomplete or how you'd approach it better. And if you want to build with us, let us know why you're the right founder for the job.
1. Robotics Middleware
Sensor fusion is foundational infrastructure treated like custom engineering. It wastes time, costs money, and breaks in production.
Developers building production robotics hit the same wall on every deployment. Low-level sensor drivers written from scratch. Adapters rebuilt. DDS configurations tuned by hand. Calibration logic rewritten. This isn't novel engineering. It's the same work, done by every team, on every deployment, because no standardized layer exists. When it breaks: timestamps drift, QoS settings get misconfigured, latency turns inconsistent, sensor fusion fails. Bad perception. Bad decisions. Robots that fail in the field, or worse, operate unsafely.
Current systems: custom adapters, hand-tuned configurations, synchronization logic rebuilt from scratch on every deployment.
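To make the missing layer concrete, here's a rough sketch of the kind of contract it would enforce. The names (StampedReading, SensorDriver, gate_by_skew) are hypothetical, not an existing library; the point is the abstraction every team currently rebuilds by hand.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class StampedReading:
    """A sensor sample aligned to a single monotonic clock domain."""
    timestamp_ns: int   # monotonic time, shared across the whole sensor stack
    frame_id: str       # calibration frame the measurement is expressed in
    payload: bytes      # raw measurement: image, point cloud, IMU packet


class SensorDriver(Protocol):
    """What every driver would expose, instead of a bespoke per-project adapter."""
    def read(self) -> StampedReading: ...
    def intrinsics(self) -> dict: ...   # calibration surfaced once, not rewritten per project


def gate_by_skew(readings: list[StampedReading], max_skew_ns: int = 5_000_000) -> list[StampedReading]:
    """Drop samples whose clocks have drifted too far apart, so the failure mode
    above (timestamp drift leading to bad perception) is caught explicitly
    instead of silently corrupting fusion."""
    if not readings:
        return []
    newest = max(r.timestamp_ns for r in readings)
    return [r for r in readings if newest - r.timestamp_ns <= max_skew_ns]
```

The hard part isn't this snippet. It's making the same contract hold across cameras, LiDAR, IMUs, and GNSS on arbitrary hardware, which is exactly the work no team wants to repeat.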
Why this works now: Multi-sensor stacks are the new default. Cameras, LiDAR, IMUs, and GNSS running together on every serious deployment. The complexity that creates is no longer a one-time engineering problem; it's a recurring cost on every project. Defense validated the demand years ago and absorbed the cost because it had the budget. Industrial and manufacturing robotics is now scaling fast, and those teams don't. The layer needs to exist. Nobody has built the software-only, hardware-agnostic version yet.
The wedge: Become the sensor fusion layer every robotics team consumes instead of builds.
The expansion: Sensor fusion is the data foundation. Own it, then expand into the constraint layer: safety boundaries, hard limits on actuator commands. Then the policy layer: permissible robot behavior. Then the full middleware stack that sits between hardware and AI models across every robotics vertical. Own the data layer, build everything on top.
You're the right founder if: You've built production robotics systems. You've spent weeks on sensor integration before touching application logic. You know what DDS tuning costs in engineering time. You've shipped a robot and know exactly which layer broke first.
2. Agentic Incident Management for Manufacturing
Factory floors generate thousands of alerts every day. At $260,000 an hour in unplanned downtime, operators can't afford to guess which one matters.
Manufacturing operations run on a fragmented stack: SCADA, PLC, MES, Historian, quality vision systems, predictive maintenance platforms, CMMS, environmental sensors. None of them talk to each other in a way that produces a coherent picture of what's happening on the floor. When something goes wrong, an operator gets an alarm. Then another. Then twenty more. Correlating those signals into a root cause and executing a safe response is almost entirely manual and almost entirely reactive.
Cybersecurity already solved this problem for IT environments. Agentic SOC platforms like D3 now handle alert triage, investigation, and remediation with AI agents and human-in-the-loop controls. Manufacturing operations haven't had their equivalent. The problem is identical in structure: too many signals, not enough context, decisions made too slowly.
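As a sketch of where an agentic layer would start (the names and thresholds here are illustrative, not any product's API): group alarms from different systems by asset and time window before any triage or remediation logic runs.

```python
from dataclasses import dataclass


@dataclass
class Alert:
    source: str       # "SCADA", "MES", "CMMS", quality vision, etc.
    asset_id: str     # machine or line the alert refers to
    timestamp: float  # epoch seconds
    message: str


def correlate(alerts: list[Alert], window_s: float = 120.0) -> list[list[Alert]]:
    """Group alerts on the same asset that fire within window_s of each other,
    so an operator (or an agent) sees one candidate incident per asset instead
    of twenty uncorrelated alarms."""
    incidents: list[list[Alert]] = []
    open_incident: dict[str, list[Alert]] = {}
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        bucket = open_incident.get(alert.asset_id)
        if bucket and alert.timestamp - bucket[-1].timestamp <= window_s:
            bucket.append(alert)       # same asset, close in time: same incident
        else:
            bucket = [alert]           # new incident for this asset
            open_incident[alert.asset_id] = bucket
            incidents.append(bucket)
    return incidents
```

Triage, root-cause hypotheses, and remediation sit on top of a grouping step like this, with a human approving anything that touches the line.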
Why this works now: The model is already proven. Agentic SOC platforms have demonstrated that AI agents can handle alert triage, investigation, and remediation at scale in IT environments. Fabrix.ai built exactly this for IT/NOC operations and raised ~$39M doing it. Nobody has applied it to the OT side. The manufacturing floor has the same problem, higher stakes, and no equivalent solution.
The wedge: Ingest every signal. Surface what matters. Act before the line goes down.
The expansion: Incident response is the wedge. The opportunity is becoming the operating system for the factory floor. Every SOP, every anomaly baseline, every remediation workflow running through one intelligent layer that gets smarter with every incident it handles.
You're the right founder if: You've worked in manufacturing operations, OT engineering, or industrial automation. You've stood in front of an alarm console with twenty active alerts and had to decide in real time which one mattered. You understand the difference between an IT incident and one where the wrong remediation action shuts down a production line.
3. The Control Plane for Industrial Automation
Audi built a dedicated initiative with Broadcom, Cisco, and Siemens to move to virtual PLCs. Most manufacturers will get there eventually. None of them are ready today. The control plane that bridges that gap is what we're building.
Programmable Logic Controllers control every assembly line, conveyor system, and robotic cell across every major manufacturing vertical. But the software workflow around PLC code is completely broken. Updates require physical access to hardware. No version control. Testing happens on live systems. Engineers manually reconfigure logic across dozens of devices every time something changes. Downtime during updates is measured in hours. In large facilities, that's millions of dollars.
The transition to virtualized PLCs is coming. Siemens, CODESYS, and Phoenix Contact have all released containerized vPLC products in the last two years. But the market is nascent. No security certifications. No sub-1ms control task support. No code portability across runtimes. Hardware PLCs aren't going anywhere yet.
The opportunity is in the layer above. Whether that's AI-assisted PLC development, version control for PLC fleets, prompt-to-PLC code generation, or the full control plane, the workflow around industrial automation is broken and the layer above it is wide open.
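For illustration only, here's a minimal sketch of what "version control for PLC fleets" could mean in practice, assuming exported structured text and hypothetical device IDs: content-addressed releases and a staged rollout plan instead of a manual upload to live hardware.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass
class PLCRelease:
    """A versioned, content-addressed unit of control logic: the thing that today
    often lives as an unversioned project file on one engineer's laptop."""
    program_source: str        # exported IEC 61131-3 structured text
    target_devices: list[str]  # PLCs this logic should run on

    def version(self) -> str:
        return hashlib.sha256(self.program_source.encode()).hexdigest()[:12]


def plan_rollout(release: PLCRelease, canary_count: int = 1) -> dict:
    """Produce a staged plan (canary first, then the rest behind an approval gate)
    instead of an all-at-once upload to live hardware. The deployment transport
    itself is deliberately left out."""
    canaries = release.target_devices[:canary_count]
    remainder = release.target_devices[canary_count:]
    return {
        "version": release.version(),
        "stages": [
            {"name": "canary", "devices": canaries, "requires_approval": False},
            {"name": "fleet", "devices": remainder, "requires_approval": True},
        ],
    }


if __name__ == "__main__":
    release = PLCRelease("IF Start THEN Motor := TRUE; END_IF;", ["plc-01", "plc-02", "plc-03"])
    print(json.dumps(plan_rollout(release), indent=2))
```

Everything vendor-specific (transport, runtimes, safety interlocks, rollback) is exactly what the real product has to solve; the sketch only shows the workflow shape.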
Why this works now: Major PLC vendors have embraced open ecosystems, APIs, and software-driven integrations. The door to build above the runtime layer is open. Upper-mid-market industrial operators have the pain and the agility to adopt new tooling faster than large OEMs. Industrial DevOps raised $10M in 2022. OTee raised $1.5M in 2024. The category is real but not yet crowded.
The wedge: Own how engineers write and deploy PLC logic. The first team that ships a PLC update without downtime is the reference customer.
The expansion: Own the developer workflow now. As vPLC infrastructure matures through the late 2020s and latency constraints resolve, the platform naturally evolves into the operating environment for virtualized control logic. Own the runtime later.
You're the right founder if: You've worked in industrial automation, OT engineering, or manufacturing operations. You've manually uploaded PLC code to hardware on a factory floor. You know exactly what it costs when that process breaks and you can see why the DevOps workflows that transformed software development haven't touched this world yet.
4. The Infrastructure Layer for Edge AI
AI inference is leaving the data center. Autonomous vehicles, industrial robots, AR/VR headsets, and smart sensors are all running models at the edge, where power budgets are tight, thermals are constrained, and connectivity is unreliable. The infrastructure assumptions that work in a cloud rack collapse at the edge: memory bandwidth becomes a bottleneck, latency requirements are hard, and the hardware stack is fragmented across dozens of incompatible platforms, chipsets, and runtimes.
SK Hynix and Micron are building next-generation DRAM for edge workloads; the silicon is coming. What doesn't exist yet is the software and tooling layer that makes that hardware usable: frameworks that abstract across platforms, memory managers that optimize dynamically, and observability stacks that surface what's happening inside a deployed model running on a sensor or a vehicle.
Why this works now: Edge AI is no longer experimental. Inference at the device level is a production requirement across automotive, robotics, and industrial verticals. The hardware buildout is accelerating, but the developer experience around it is still largely bespoke. Every team rebuilds memory management, platform abstraction, and monitoring logic from scratch. The fragmentation across chipsets and runtimes means no single vendor's tools cover the stack.
A few examples:
Edge Observability and Developer Tools
Teams deploying models at the edge have no visibility into what those models are doing in the field. Inference latency, memory pressure, thermal headroom: none of it is legible. When a model degrades, debugging is manual and slow.
The opportunity: guarantee real-time observability across every device in a fleet. Start with industrial robotics and autonomous vehicles where the cost of failure is highest, then expand into regulated verticals requiring safety and audit infrastructure.
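A rough sketch of the fleet-side primitive such a product would need, with hypothetical names and thresholds: a rolling window of per-inference samples and a degradation check that fires before the device fails silently.

```python
from dataclasses import dataclass, field


@dataclass
class InferenceSample:
    latency_ms: float      # end-to-end inference latency on device
    peak_memory_mb: float  # peak working set during the inference
    soc_temp_c: float      # SoC temperature at sample time


@dataclass
class EdgeTelemetry:
    """Rolling per-device view of the three signals named above: inference
    latency, memory pressure, and thermal headroom."""
    device_id: str
    max_samples: int = 1000
    window: list[InferenceSample] = field(default_factory=list)

    def record(self, sample: InferenceSample) -> None:
        self.window.append(sample)
        del self.window[:-self.max_samples]  # keep only the most recent samples

    def degraded(self, latency_budget_ms: float, temp_limit_c: float) -> bool:
        """Flag the device before it fails silently: p95 latency over budget,
        or a sustained fraction of samples at the thermal limit."""
        if not self.window:
            return False
        latencies = sorted(s.latency_ms for s in self.window)
        p95 = latencies[int(0.95 * (len(latencies) - 1))]
        hot_fraction = sum(s.soc_temp_c >= temp_limit_c for s in self.window) / len(self.window)
        return p95 > latency_budget_ms or hot_fraction > 0.2
```

What happens when degraded() fires (throttle, fall back to a smaller model, page someone) is where the product differentiates.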
Memory Management and Testing for Edge Deployments
Edge devices operate in conditions data center memory was never designed for: temperature extremes, vibration, extended duty cycles. Memory allocation across a fleet is static and manual. Performance degradation goes undetected until failure.
The opportunity: dynamic memory allocation and fleet-level optimization, plus tooling that validates performance under real operating conditions, not just in a lab.
Hardware-Agnostic Frameworks
A model that runs on one edge chipset requires substantial rework to run on another. There is no universal SDK, no abstraction layer that makes inference logic portable. Vendor lock-in is the default, not a choice.
The opportunity: own the abstraction layer between model and hardware. Once you sit there, everything else (optimization, observability, compliance) layers on top.
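One way to picture that abstraction layer, as a sketch with hypothetical names rather than any vendor's SDK: a registry of runtimes behind a single load/infer interface, so application code never binds to a specific chipset.

```python
from typing import Callable, Protocol


class Runtime(Protocol):
    """The seam the abstraction layer would own: one interface, many backends."""
    def load(self, model_path: str) -> None: ...
    def infer(self, inputs: dict) -> dict: ...


_RUNTIMES: dict[str, Callable[[], Runtime]] = {}


def register_runtime(name: str):
    """Backends (an NPU SDK, a GPU runtime, a DSP toolchain) register a factory
    here; application code never imports a vendor SDK directly."""
    def wrap(factory: Callable[[], Runtime]) -> Callable[[], Runtime]:
        _RUNTIMES[name] = factory
        return factory
    return wrap


def load_model(model_path: str, target: str) -> Runtime:
    """The same call on every chipset: porting a model becomes a configuration
    change, not a rewrite of the inference path."""
    runtime = _RUNTIMES[target]()   # raises KeyError if no backend is registered
    runtime.load(model_path)
    return runtime
```

The value isn't the registry; it's the compilers, kernels, and conformance tests behind each registered backend.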
Memory Architectures and Processing-in-Memory
For vision processing, sensor fusion, and real-time control, the cost of moving data between memory and compute is measured in power, latency, and heat. Data center DRAM was not designed for these tradeoffs. Application-specific memory systems for edge workloads don't yet exist at scale.
The opportunity: build memory systems, or the software layer above them, optimized for a single edge workload and dramatically better at it than general-purpose DRAM allows.
You're the right founder if: You've shipped inference at the edge and hit the wall on memory, latency, or platform lock-in. You've debugged a model that ran fine in the cloud and failed on the device. You know the gap isn't the model. It's the infrastructure underneath it.
How to Pitch Us
At Forum’s AI Venture Studio, we don't need polished business plans. We want your point of view.
If you've experienced one of these problems and have a perspective on how to solve it, pitch us your thinking. Better yet, if you think we're wrong about how to approach these opportunities, tell us why. Show us what we're missing.
We'll work together to validate the market, find design partners, and build the case for incorporation: market diligence, problem-market fit validation, studio fit assessment, and the company narrative. By the end, we both know if there's a company to build.
The deal: $250K investment plus our full studio team. Product, engineering, growth, and recruitment working alongside you to get from zero to momentum.
We've launched 17 companies this way. We know what it takes to get from idea to funded company.
"The founders we want to co-build with are the ones who have spent ten years inside an industry, watched the same broken process repeat itself, and finally decided to be the one who fixes it. That depth is not something you can fake, and it's not something we can provide. It's the edge that makes everything else we do together actually work." - Alice Krenitski, Head of Forum’s AI Venture Studio
If you're one of those founders, we want to talk.
Frequently Asked Questions
What is Forum Ventures' AI Studio?
Forum Ventures' AI Studio co-builds AI-first B2B companies alongside founders from day one. The Studio invests $250K at formation and provides a full team across product, engineering, GTM, sales, marketing, design, and finance. Since launching in 2023, it has co-founded 17 companies.
What types of companies is Forum Ventures' AI Studio looking to co-build?
Forum's AI Studio is industry-agnostic and continuously developing new theses across B2B. Right now, the team is going deep on infrastructure gaps in four areas: robotics middleware, agentic incident management for manufacturing, the control plane for industrial automation, and edge AI deployments. These reflect where the Studio sees the most urgent opportunity today -- not the limits of what it builds.
How much does Forum Ventures invest at formation?
Forum Ventures invests $250K at company formation, alongside a full co-building studio team. Founders retain board control throughout the process.
What kind of founder is Forum Ventures' AI Studio looking for?
Forum looks for founders with deep domain expertise in the problem they want to solve -- people who have worked inside an industry long enough to know exactly where it's broken and why existing solutions fall short. Hands-on experience matters more than a polished plan. The Studio is always open to founders with bold, original B2B ideas, even outside current focus areas.
Do I need a finished business plan to apply?
No. Forum does not require a polished business plan. They want your perspective on the problem and how you'd approach solving it. Together, you'll validate the market, find design partners, and build the case for incorporation.
What is the co-building process with Forum Ventures?
Forum works alongside founders on market diligence, problem-market fit validation, studio fit assessment, finding design partners, and building the company narrative. The goal is that both parties know whether there's a company worth building before committing to incorporation.
About Forum Ventures' AI Studio
Forum Ventures' AI Studio builds and launches AI-first B2B companies alongside founders from inception. We invest $250K at formation and provide a full co-building team across product, engineering, GTM, sales, marketing, design, and finance. Founders retain board control and build with the support of a team that's done this before.
Since launching in 2023, we've co-founded 17 companies. 63% of our Studio portfolio has raised follow-on capital within 12 months of formation, beating our own historic average of less than 50% over 18 months.
We're the only early-stage firm investing from pre-idea through early growth, across our AI Venture Studio, Accelerator, and Pre-Seed Fund.
Learn more at forumvc.com/ai-studio.