
Welcome!

We are very excited to welcome you to the Au-Zone family and can't wait to see what exciting projects you create with the help of the Maivin AI Vision Starter Kit!

To get started, please click the link below, which will take you to the Au-Zone Zendesk platform. Once there, you will need to register your email address and a password to gain access to the site.

This is where you will find your quick start guide, product support, and technical information.


Maivin Videos

Maivin AI Vision Starter Kit
Detailed Tear Down Video

A detailed teardown of the Maivin AI Vision Starter Kit by Au-Zone Technologies.

Maivin AI Vision Starter Kit - Click Here

Maivin AI Vision Starter Kit
Unboxing & Setup

Join Au-Zone Technologies for a detailed unboxing and setup video for the EdgeFirst Starter Kit | Micro.

EdgeFirst Starter Kit | Micro - Click Here

Product Information

Documents

Product Brief - Maivin AI Vision Starter Kit

The Maivin AI Vision Starter Kit is a modular AI smart camera platform built on NXP's i.MX8MPlus applications processor, combining production-grade hardware and software components to enable rapid prototyping and field deployment of custom vision solutions. The Maivin targets applications where compute performance is the priority.

Product Brief - EdgeFirst Model Pack for Object Detection

EdgeFirst Model Pack is a bundle of state-of-the-art detection models, pretrained on COCO and OpenImages, that has been fully tested and optimized for NXP i.MX RT crossover MCUs and i.MX8 applications processors.

Product Brief - EdgeFirst Vision Pack for Application Processors

The Vision Pack for Applications Processors provides developers with an end-to-end, hardware-accelerated video pipeline for optimized AI-based vision applications. It is fully integrated with the EdgeFirstRT inference engine to deliver a high-performance, low-overhead AI vision solution on applications processors.

EdgeFirstRT Benchmark Data

The EdgeFirstRT runtime inference engine provides developers with the freedom to quickly deploy ML models to a broad selection of embedded devices and compute architectures without sacrificing flexibility or performance.
