Qeexo

Software : Edge Computing : AI Inference

Website | Blog | Video

Mountain View, California, United States

VC

Qeexo is the first company to automate end-to-end machine learning for edge devices. Its Qeexo AutoML platform provides an intuitive UI that allows users to collect, clean, and visualize sensor data and automatically build “tinyML” models using different algorithms. Spun out of Carnegie Mellon University, Qeexo is venture-backed and headquartered in Mountain View, CA, with offices in Pittsburgh, Shanghai, and Beijing.

Assembly Line

A Step by Step Guide to Robot Arm Demo

Date:

Topics: Robotic Arm

Organizations: Qeexo

Assume we are operating a smart warehouse optimized for an e-commerce company. In the warehouse, we employ several "intelligent robot movers" to help us move objects from spot to spot. In this demonstration, we use a miniaturized "intelligent robot mover" powered by Qeexo AutoML to determine whether the robot has gripped an object.

This blog is intended to show you how to use Qeexo AutoML to build your own "intelligent robot mover" end to end, including data collection, data segmentation, model training and evaluation, and live testing.
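The data-segmentation step above can be sketched in plain Python. This is not Qeexo AutoML's actual code; it is an illustrative example, with a made-up sensor stream and a toy mean-magnitude rule standing in for a trained model, showing how a continuous sensor signal is split into fixed-size windows that are then classified one at a time.

```python
def segment(samples, window, stride):
    """Split a stream of sensor samples into fixed-size, overlapping windows."""
    return [samples[i:i + window] for i in range(0, len(samples) - window + 1, stride)]

def features(window):
    """Extract simple statistical features (mean, variance) from one window."""
    mean = sum(window) / len(window)
    var = sum((x - mean) ** 2 for x in window) / len(window)
    return mean, var

# Hypothetical accelerometer-magnitude stream: idle at first, then
# sustained vibration while the gripper holds an object.
stream = [0.0] * 50 + [1.5, 1.4, 1.6] * 17

windows = segment(stream, window=25, stride=10)

# Toy classification rule in place of a trained tinyML model:
# a high mean magnitude suggests the robot has gripped an object.
labels = ["gripped" if features(w)[0] > 0.5 else "idle" for w in windows]
```

In a real pipeline, the per-window features would feed a model trained and evaluated on labeled recordings, and the same windowing would run on-device during live testing.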

Read more at Qeexo Blog

Tree Model Quantization for Embedded Machine Learning Applications

Date:

Author: Leslie J. Schradin

Topics: edge computing, machine learning

Organizations: Qeexo

Compressed tree-based models are worth considering for embedded machine learning applications, in particular when compressed via quantization. Quantization can shrink models significantly at the cost of a slight loss in model fidelity, freeing room on the device for other programs.

Read more at Qeexo

Building effective IoT applications with tinyML and automated machine learning

Date:

Authors: Rajen Bhatt, Tina Shyuan

Topics: IIoT, machine learning

Organizations: Qeexo

The convergence of IoT devices and ML algorithms enables a wide range of smart applications and enhanced user experiences, which are made possible by low-power, low-latency, and lightweight machine learning inference, i.e., tinyML.

Read more at Embedded