“AI Assisted Calibration of IMUs”
IP Approach is pleased to present the exclusive patent for sale “AI assisted calibration of IMUS” which includes U.S. Granted Patent US 12,372,376 B1 assigned to AIM Design LLC. The IP relates to inertial measurement unit (IMU) sensor networks with integrated artificial intelligence processing. Specifically, the invention encompasses advanced sensor systems that utilize deep neural network architectures to process raw sensor data for enhanced motion tracking, position estimation, and navigation applications. The technological domain spans robotics, motion analysis, navigation systems, and edge computing, with particular emphasis on improving sensor accuracy, reducing computational overhead, and enabling real-time intelligent processing of inertial and environmental sensor data.
The technology disclosed provides the following advantages:
- The invention relates to an advanced inertial measurement unit (IMU) sensor network incorporating integrated artificial intelligence for on-device processing. The system enhances measurement accuracy while reducing computational overhead by leveraging deep learning techniques and real-time edge processing compared with conventional IMU-based solutions.
- The invention employs simulation data gathered under diverse operating conditions to train a neural network model. The training process encompasses learning multiple movement patterns with associated pattern-specific constraints, while during deployment, operational data is continuously acquired and used to adaptively update the neural network based on observed usage patterns.
- The invention provides a processor that performs adaptive processing, dynamically adjusting its operations in response to current conditions and system requirements to balance and optimize measurement accuracy against computational efficiency.
- The invention defines a workflow in which context-guided raw sensor data is acquired and validated against ground-truth reference data to develop the underlying model. Once trained, the model is optimized for efficient execution on resource-constrained single-board MCU or CPU platforms and integrated with inertial measurement unit (IMU) and navigation-processing modules.
- The invention directly processes raw sensor voltage outputs to derive high-precision position and orientation estimates while substantially reducing computational overhead. This voltage-level processing constitutes a marked deviation from conventional pipelines that rely on factory calibration constants, lookup tables, and extensive manufacturer-supplied correction data. The architecture employs only selective signal conditioning as a pre-processing stage, preserving the central innovation of end-to-end processing from raw voltage measurements.
- The invention employs a three-axis gyroscope subsystem configured to measure angular velocity around the X, Y, and Z axes of each sensor node. The gyroscope provides high-precision rotational rate measurements over a wide dynamic range, enabling accurate characterization of complex rotational motions.
- The invention comprises continuous monitoring of the system’s kinematic parameters, automatic detection of operating conditions based on the monitored data, and initiation of appropriate response mechanisms when predefined conditions are met.
- The invention provides a processing device built around a Teensy 4.1 microcontroller configured to run AI models. This MCU balances processing power, power consumption, and form factor, and implements power-management techniques to maximize battery life while maintaining processing performance.
- The invention defines a training pipeline for the deep neural network that leverages real-world derived simulation datasets acquired under diverse environmental conditions. Specialized test rigs reproduce the full range of joint articulations and orientations in three-dimensional space while concurrently capturing ground-truth positional information, thereby generating paired training data.
- The invention incorporates a feedback subsystem that completes the overall architecture by providing real-time monitoring of model outputs. This mechanism continuously evaluates prediction accuracy, detects discrepancies, and initiates alerts or corrective actions as needed. Operational data is fed back into the learning pipeline to enable adaptive refinement and iterative model updates, thereby supporting ongoing enhancement of calibration accuracy over the system’s service life.
- The invention delivers a substantial advancement in inertial measurement unit technology by tightly integrating AI-based processing with efficient edge computing. This architecture enables more accurate and reliable motion tracking while markedly reducing reliance on external reference or sensing systems.
- The invention evaluates long-distance navigation performance using inertial measurement unit data exclusively. This testing regime verifies the system’s ability to constrain drift over prolonged operation and demonstrates that it can maintain accurate positioning in the absence of any external reference signals.
- The invention provides multiple sensors for measuring environmental conditions such as temperature, structural strain (via strain gauges), magnetic fields, or other relevant parameters that could affect IMU performance. The temperature sensors are strategically placed to monitor both ambient and component-level temperatures, enabling real-time compensation for thermal effects.
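The end-to-end voltage-to-orientation idea summarized in the bullets above can be illustrated with a minimal neural-network sketch. All layer sizes, weights, channel layout, and function names below are assumptions chosen for illustration only; they do not reproduce the patented model or its trained parameters.

```python
import numpy as np

# Hypothetical layout: 6 raw voltage channels (3-axis gyroscope +
# 3-axis accelerometer) plus one temperature reading, mapped directly
# to a 3-value orientation estimate (roll, pitch, yaw) without
# factory calibration constants or lookup tables.
RAW_CHANNELS = 7          # 6 sensor voltages + 1 temperature input
HIDDEN_UNITS = 16
OUTPUT_DIMS = 3           # roll, pitch, yaw estimate

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(RAW_CHANNELS, HIDDEN_UNITS))
b1 = np.zeros(HIDDEN_UNITS)
W2 = rng.normal(scale=0.1, size=(HIDDEN_UNITS, OUTPUT_DIMS))
b2 = np.zeros(OUTPUT_DIMS)

def estimate_orientation(raw_voltages, temperature_c):
    """Map raw sensor voltages to an orientation estimate; the
    temperature input stands in for the thermal-compensation idea."""
    x = np.append(raw_voltages, temperature_c)   # fuse thermal context
    h = np.tanh(x @ W1 + b1)                     # learned conditioning
    return h @ W2 + b2                           # orientation estimate

sample_voltages = np.array([1.62, 1.58, 1.65, 1.49, 1.51, 1.50])
orientation = estimate_orientation(sample_voltages, 23.5)
print(orientation)
```

In a deployed version of this idea, the random weights would be replaced by parameters trained against ground-truth rig data and then quantized for an MCU-class target, as the listing describes.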
Please contact Justin Ehrlickman via email at justin@ipapproach.com or phone at 845-558-7901 to receive a Brokerage Marketing Package.
“Artificial Intelligence Brain”
IP Approach is pleased to present the exclusive patent for sale “Artificial intelligence brain” which includes U.S. Granted Patent US 10,956,809 B1 and its foreign counterparts SG 10201910949P and WO 2021/101439 A1, assigned to Wang Lian. The IP relates to a device, system and/or method to produce an artificial intelligence brain.
The technology disclosed provides the following advantages:
- The invention transforms expression outputs into human-readable formats, either written or spoken. Written language is rendered as an image generated by an algorithm, while spoken language is rendered as sound generated by an algorithm, independent of the specific language.
- The invention remotely monitors multiple locations using sensors, such as image and sound sensors, allowing the AI to see, hear, and perceive activities at various points.
- The invention uses deep learning to process image and voice data for speech recognition, segment images, and identify objects like faces, noses, animals, or trees. It processes both still images and video.
- The invention identifies surface conditions near the sensor, including temperature, hardness, flexibility, smoothness, and force applied.
- The invention detects chemical compounds responsible for specific odors in the air and flavors in liquids or solids, such as sweetness, sourness, saltiness, bitterness, and umami.
- The invention processes input from multiple nodes in the preceding layer using shared weights in a many-to-one mapping, reducing parameters per node. This functions as applying filters to capture local information.
- And More!
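The shared-weight, many-to-one mapping described above is the core idea behind convolutional layers: every output node applies the same small filter to a local window of the input, so the parameter count stays fixed regardless of layer width. The filter values and sizes below are illustrative, not taken from the patent.

```python
import numpy as np

# One shared 3-tap filter replaces a full weight vector per node.
kernel = np.array([0.25, 0.5, 0.25])   # only 3 shared parameters

def shared_weight_layer(x, w):
    """Many-to-one mapping: each output pools len(w) neighbouring
    inputs through one shared filter, capturing local structure."""
    n = len(x) - len(w) + 1
    return np.array([x[i:i + len(w)] @ w for i in range(n)])

signal = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
out = shared_weight_layer(signal, kernel)
# A fully connected layer with the same input/output sizes would need
# len(signal) * len(out) = 24 weights; the shared filter needs 3.
print(out)   # -> [0.5 0.5 0.5 0.5]
```

The same parameter-sharing argument extends to 2-D filters over images, which is how the listing's face, object, and scene segmentation claims would typically be realized.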
Please contact Justin Ehrlickman via email at justin@ipapproach.com or phone at 845-558-7901 to receive a Brokerage Marketing Package.
IP Approach has patents for sale in the following categories:
Advertising
Artificial Intelligence (AI)
Automotive & Vehicles & Trailers
Biometrics & Touch Screen
Construction
Consumer - Electrical
Consumer - Other
Data Management
Delivery & Routes
Display and Imaging
Energy
Gaming
Internet of Things (IoT)
Lighting
Medical
Mobile & Telecommunications
Office Furniture / Equipment
Other
Security
Semiconductor and Packaging
Social Media
Software
Tools & Brackets
Trademark
Video
Wireless