
Description
edge-ml is a fully managed SaaS (software-as-a-service) solution for deploying, executing, and monitoring machine learning models directly on edge devices. It is based on an open-source, browser-based toolchain for machine learning on microcontrollers developed at KIT. Designed for low-latency inference and efficient on-device processing, the platform provides scalable infrastructure, accelerated runtimes, and robust MLOps workflows tailored to AI at the network edge. It eliminates the complexity of managing device-specific integrated development environments (IDEs) and enables real-time, confidential, privacy-preserving ML computation on low-cost microcontrollers without depending on cloud round trips.
Target users
- Companies or researchers aiming to run AI workloads on distributed or resource-constrained devices
- Developers and ML engineers needing automated deployment pipelines for edge inference
- Enterprises that must limit data transfer for privacy, bandwidth, or regulatory reasons
How it works
Central to edge-ml is our flow: in a few simple steps, edge-ml lets you record data, label samples, train models, and deploy validated embedded machine learning models directly from the web browser to the edge.
edge-ml requires minimal initialization and supports real-time or bulk upload from the edge. Pre-recorded data can be transferred as CSV files through a simple drag-and-drop interface to the edge-ml cloud storage. edge-ml AUTO performs neural architecture search to find the best neural network for your use case. (This feature is currently available only to alpha users on request.)
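For illustration, a pre-recorded dataset for bulk upload might be laid out as a timestamp column plus one column per sensor channel. The column names below are assumptions for the example, not a documented edge-ml schema:

```python
import csv
import io

# Hypothetical example: serialize pre-recorded accelerometer samples into a
# CSV with a timestamp column and one column per sensor channel. The exact
# column layout edge-ml expects may differ from this sketch.
def build_csv(samples):
    """samples: list of (timestamp_ms, ax, ay, az) tuples."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["timestamp", "acc_x", "acc_y", "acc_z"])
    writer.writerows(samples)
    return buf.getvalue()

csv_text = build_csv([(0, 0.01, -0.02, 9.81), (10, 0.03, -0.01, 9.79)])
print(csv_text)
```

A file produced this way could then be dropped onto the upload interface like any other CSV.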
edge-ml operates following a straightforward lifecycle:
- Collect — Use our open-source libraries to collect data and push it to the edge-ml cloud.
- Manage — Manage and delete datasets or synchronize different sensor streams.
- Label — Use the web-based labeling tool to add or refine data labels and annotations.
- Train — Train embedded models with cloud- and HPC-based computing resources.
- Validate — Receive detailed reports about model performance metrics.
- Deploy — Port your optimized edge model back to the embedded platform.
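The Collect step above can be sketched as follows. The payload shape, field names, and device identifier are illustrative assumptions, not the real edge-ml API:

```python
import json

# Hypothetical sketch of the Collect step: package sensor readings into a
# JSON payload that a collection library could push to the edge-ml cloud
# (e.g., via an HTTP POST). Field names and the auth scheme are assumptions.
def make_payload(device_id, readings, api_key):
    """readings: list of (timestamp_ms, channel, value) triples."""
    return {
        "device": device_id,       # assumed device identifier field
        "apiKey": api_key,         # assumed authentication field
        "samples": [
            {"t": t, "channel": ch, "value": v} for t, ch, v in readings
        ],
    }

payload = make_payload("imu-01", [(0, "acc_x", 0.01), (10, "acc_x", 0.03)], "demo-key")
print(json.dumps(payload, indent=2))
```

Once data like this reaches the cloud, the remaining steps (Manage, Label, Train, Validate, Deploy) are carried out through the web interface.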
Applications
Our SaaS offering is particularly valuable for companies, with use cases including:
- On-device machine status/failure recognition
- Human-machine interaction and user awareness (e.g., gestures, activities)
How to get access
Beta access is available via https://beta.edge-ml.org
Disclaimer: Please note that edge-ml is still under development and in the beta stage. HammerHAI cannot guarantee its quality of service. Please contact us for production-ready deployments.
Contact
Tobias King, KIT
Funding
- The service is offered free of charge via HammerHAI (European High Performance Computing Joint Undertaking, Grant No. 101234027).
- The service was developed with support from the state of Baden-Württemberg within the Competence Center AI Engineering and from the BMFTR within the Smart Data Innovation Lab.
