Every manufacturing line has the same problem: human inspectors miss things. Not because they're careless, but because the task is fundamentally inhuman — staring at thousands of identical parts per hour, looking for sub-millimeter deviations.

We built a system that doesn't blink.

The Problem

A European automotive supplier was running three-shift manual inspection on their stamping line. The defect escape rate hovered around 2.3%, which sounds small until you calculate the cost of a recall. Each escaped defect that reached the OEM triggered penalty fees, rework costs, and relationship damage that compounded quarter over quarter.

They'd tried rule-based vision systems before. The results were underwhelming — too many false positives made operators ignore the alerts entirely.

Our Approach

We deployed a multi-stage computer vision pipeline using Cognity's inference engine:

  • Stage 1: High-speed screening — A lightweight model running on edge hardware classifies parts as pass/review/fail at line speed (1,200 parts/hour). Latency under 15ms.
  • Stage 2: Detailed analysis — Parts flagged for review get routed to a secondary station where a higher-resolution model examines surface geometry, coating uniformity, and dimensional tolerance.
  • Stage 3: Continuous learning — Every operator override feeds back into the training pipeline. The system gets better every week.
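The pass/review/fail routing at the heart of Stages 1 and 2 can be sketched as a simple confidence gate. The threshold values below are placeholders; in production they are tuned per line and per part family:

```python
from dataclasses import dataclass

# Illustrative thresholds -- production values are tuned per line
# and per part family; these specific numbers are placeholders.
PASS_THRESHOLD = 0.95  # Stage 1 confidence at or above this: part passes
FAIL_THRESHOLD = 0.20  # confidence at or below this: immediate fail

@dataclass
class Verdict:
    decision: str      # "pass", "fail", or "review"
    confidence: float

def stage1_route(ok_confidence: float) -> Verdict:
    """High-speed screening: decide pass/fail at line speed,
    routing ambiguous parts to the Stage 2 detail station."""
    if ok_confidence >= PASS_THRESHOLD:
        return Verdict("pass", ok_confidence)
    if ok_confidence <= FAIL_THRESHOLD:
        return Verdict("fail", ok_confidence)
    return Verdict("review", ok_confidence)
```

Only parts landing in the "review" band leave the line-speed path; everything else is dispositioned in the sub-15ms Stage 1 budget.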

The Architecture

The edge nodes run quantized models on NVIDIA Jetson hardware. Raw image data streams to our on-premises Cognity instance for training and analytics, but inference happens entirely at the edge — no cloud dependency, no latency spikes.

Camera Array → Edge Node (inference) → PLC Integration → Line Control
                    ↓
              Cognity Hub (training, analytics, model updates)

We integrated directly with their existing PLC infrastructure via OPC-UA, which meant zero changes to line control logic. The vision system acts as an advisor — it flags, but the PLC makes the stop/go decision based on configurable thresholds.
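The advisor pattern described above can be sketched as follows. The dict here stands in for OPC-UA tag writes (a real deployment would use an OPC-UA client library; the tag names and threshold values are illustrative). The key contract: the vision system only publishes a score and a flag, while the stop/go decision stays in PLC logic.

```python
# Sketch of the advisor pattern. The dict below stands in for
# OPC-UA tag writes; tag names and thresholds are illustrative.
plc_tags = {
    "Vision.DefectScore": 0.0,   # last inference score, 0..1
    "Vision.FlagRaised": False,  # advisory flag only
    "Line.StopRequest": False,   # set by PLC logic, never by vision
}

FLAG_THRESHOLD = 0.60  # vision-side flag level (placeholder)
STOP_THRESHOLD = 0.85  # PLC-side stop level, operator-configurable

def vision_publish(score: float) -> None:
    """Vision side: publish the score and an advisory flag."""
    plc_tags["Vision.DefectScore"] = score
    plc_tags["Vision.FlagRaised"] = score >= FLAG_THRESHOLD

def plc_scan() -> None:
    """PLC side: the stop/go decision lives here, against a
    threshold the operators configure, not in the vision system."""
    plc_tags["Line.StopRequest"] = (
        plc_tags["Vision.DefectScore"] >= STOP_THRESHOLD
    )

vision_publish(0.70)
plc_scan()
# Flag raised for review, but score is below the PLC's stop
# threshold, so the line keeps running.
```

Keeping the thresholds on the PLC side is what made "zero changes to line control logic" possible: the vision system is just another sensor feeding existing ladder logic.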

Results

After 90 days in production:

  • Defect escape rate: 2.3% → 0.14%
  • False positive rate: Under 0.5% (down from 12% with the previous rule-based system)
  • Inspection throughput: 3x increase — one operator now covers three lines
  • ROI payback: 4.2 months

The most significant metric wasn't in the dashboard. It was the change in operator behavior. Instead of ignoring alerts, they started trusting the system. When it flagged something, they investigated. That trust loop is what makes the difference between a technology demo and a production system.

What We Learned

Three things surprised us:

  1. Lighting matters more than model architecture. We spent more time on LED array design and diffuser placement than on neural network tuning. Consistent illumination eliminated 60% of false positives before we touched the model.

  2. Operators are the best labelers. Their domain expertise produced training data that was orders of magnitude better than crowdsourced annotation. We built a simple labeling interface directly into the inspection station.

  3. The hard part is integration, not AI. Getting the vision system to talk to a 15-year-old PLC over OPC-UA with deterministic timing requirements was the actual engineering challenge. The model training was the easy part.

Looking Forward

We're now extending this system to cover weld inspection and paint defect detection. The architecture is the same — edge inference, centralized training, PLC integration. The models are different, but the pipeline is proven.

If your production line still relies on human visual inspection for quality control, the technology to change that is mature and deployable. The question isn't whether AI vision works in manufacturing. It does. The question is how fast you can integrate it into your existing infrastructure without disrupting production.

That's the problem we solve.