Why and when to use 3D sensors vs. Machine Vision?

Of the billions of products manufactured every day, few could be made without any human interaction. Digital work instructions are a medium to guide operators through each step of the process, providing in-context information such as pictures and videos. However, they have their limits in terms of execution reliability, safety and accuracy. Despite our best intentions, making mistakes is part of being human. Some mistakes, although unintended, can have a serious impact on product quality. Hence, humans deserve a second pair of eyes, especially when a manufacturing process is critical or prone to errors.


Where digital work instructions help operators do the right things, supervised work instructions empower them to do things right. They also help automate process confirmation, guiding operators automatically through the process steps. Several technologies can provide this supervision, such as 3D sensors and machine vision systems with built-in AI capabilities. But which type is most suitable for which application? And what are the benefits?

Supervised work instructions

3D sensor

What?

3D sensors measure the distance between the sensor and the nearest surface, point by point, using the time-of-flight principle. In effect, this gives you a measured height for each pixel within the target frame.
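
As a concrete illustration of the principle (independent of any particular sensor model), the distance follows from half the round-trip time of the light pulse. The sketch below, in Python, assumes a hypothetical sensor that reports a per-pixel round-trip time; most real sensors return the depth map directly.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_time_s: np.ndarray) -> np.ndarray:
    """Convert per-pixel round-trip times (seconds) into distances (metres).

    The light pulse travels to the surface and back, so the one-way
    distance is half of the speed of light times the measured time.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a 4x4 patch of round-trip times of ~6.67 ns, i.e. roughly 1 m away
times = np.full((4, 4), 6.67e-9)
depth_frame = tof_distance(times)  # each pixel now holds a distance in metres
```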

Use cases – Operator Guidance

  • Hand gesture confirmation: a 3D sensor is a perfect solution for recognizing hand gestures or providing simple operator guidance
  • Virtual confirmation: rather than tracking operator handling, 3D sensors make it possible to define fixed virtual confirmation buttons as a means to proceed through the process (a minimal sketch follows this list)
  • Picking confirmation: guide operators to the right location and simplify the picking process and execution confirmation
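
To make the virtual confirmation idea concrete, here is a minimal sketch. It assumes the 3D sensor delivers depth frames as NumPy arrays; the region bounds, distances and sensor loop are illustrative, not a specific product's API. A step is confirmed when something, typically the operator's hand, reduces the measured distance inside a pre-defined region.

```python
import numpy as np

# Hypothetical virtual confirmation button: a rectangular region of the
# depth frame (row/column bounds) plus the empty-table reference distance.
BUTTON_ROI = (slice(100, 160), slice(220, 300))   # rows, columns
REFERENCE_DISTANCE_M = 1.10                       # depth of the work surface
TRIGGER_MARGIN_M = 0.15                           # hand must be this much closer

def button_pressed(depth_frame: np.ndarray) -> bool:
    """Return True when something (e.g. a hand) covers the virtual button.

    The median depth inside the region is used so that a few noisy pixels
    cannot trigger a false confirmation.
    """
    roi_depth = np.median(depth_frame[BUTTON_ROI])
    return roi_depth < REFERENCE_DISTANCE_M - TRIGGER_MARGIN_M

# Usage idea (sensor API is hypothetical):
# for frame in sensor.stream():
#     if button_pressed(frame):
#         go_to_next_step()
```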

Advantages

  • Hands-free: having both hands available is a requirement in many scenarios. 3D sensors let you run through the process hands-free: the instruction automatically advances to the next step when the sensor detects a hand waved over a pre-defined area.
  • Save time: manually confirming each and every action can be inefficient, especially when the confirmation button is not within the operator's natural reach.

Trade-Offs

  • No inspection capabilities
  • Less accurate
  • Not self-learning
  • No position control or automatic adjustment

Machine Vision

Where 3D sensors help confirm process steps, machine vision goes one step further: it can validate operator actions or inspect products (different shapes or colors, irregularities, stains, etc.).

A practical example: imagine you are about to exercise at home using a fitness app. The app will guide you through your workout, but it won't warn you about incorrect posture or generate any alert to correct you. Unlike a real fitness coach, it simply doesn't have the capability to validate how well you execute each exercise. Tracking technologies that can do this exist, however, and have found their way into manufacturing applications; one of them is called "machine vision".

What?

A vision system with built-in AI can solve various problems that conventional sensors and smart cameras struggle with, including changing ambient light, individual differences between products, and shifts in the positions of parts. The built-in AI is designed for presence/absence differentiation and is capable of detecting the difference between acceptable and unacceptable products or handling.
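
The vendor-specific AI is a black box, but the underlying presence/absence idea can be illustrated with a deliberately simple baseline: compare the inspected region against a reference image of a known-good part. All names, regions and thresholds below are illustrative; a production system would use a trained model that is robust to lighting and position changes.

```python
import cv2
import numpy as np

def presence_check(frame_bgr: np.ndarray,
                   golden_bgr: np.ndarray,
                   roi: tuple[slice, slice],
                   threshold: float = 25.0) -> bool:
    """Rudimentary OK / not-OK check for one inspection region.

    Compares the inspected region against a 'golden' reference image of an
    acceptable product. Returns True (OK) when the mean absolute grey-level
    difference stays below the threshold.
    """
    frame_gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    golden_gray = cv2.cvtColor(golden_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(frame_gray[roi], golden_gray[roi])
    return float(diff.mean()) < threshold

# Usage idea (file names are hypothetical):
# ok = presence_check(cv2.imread("current.png"), cv2.imread("golden.png"),
#                     roi=(slice(50, 200), slice(80, 240)))
```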


Use cases – Operator Guidance

  • Validate operator handling: operator actions are supervised by machine vision algorithms that check both progress and correctness, and warnings are raised when a wrong action is taken.
  • Inspect products (shape, color, irregularities, stains, etc.) down to the smallest level of detail, supporting operators in assessing parts or components.
  • Deal with multiple (similar) variants: a vision system can easily identify the right component or product variant and makes sure the right parts are picked and assembled.
  • Flexible manufacturing: if the position of the workpiece relative to the sensor changes, the system automatically adjusts the position of the required check (see the sketch after this list).
  • Kitting: a vision system can inspect a multitude of elements per target at a glance.
  • Automate tedious, repetitive tasks: take operator variance and fatigue out of the equation by automating dangerous, repetitive checks so the operator can concentrate on more value-added work.
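
As referenced in the flexible-manufacturing point above, the sketch below shows one common way an inspection region can follow a shifted workpiece: plain OpenCV template matching locates the part, and the taught region is translated by the measured offset. Function names and taught coordinates are illustrative, not a specific vendor's API.

```python
import cv2
import numpy as np

def locate_part(frame_gray: np.ndarray,
                template_gray: np.ndarray) -> tuple[int, int]:
    """Find the top-left corner of the part in the current frame.

    Uses normalised cross-correlation template matching; the best match
    position tells us how far the part has shifted since teaching.
    """
    scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, best_loc = cv2.minMaxLoc(scores)   # (x, y) of the best match
    return best_loc

def shifted_roi(taught_roi: tuple[int, int, int, int],
                taught_part_pos: tuple[int, int],
                current_part_pos: tuple[int, int]) -> tuple[int, int, int, int]:
    """Translate the taught inspection ROI by the part's displacement."""
    x, y, w, h = taught_roi
    dx = current_part_pos[0] - taught_part_pos[0]
    dy = current_part_pos[1] - taught_part_pos[1]
    return (x + dx, y + dy, w, h)
```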

Advantages

  • Increase precision and accuracy: the accuracy of operations improves because manual actions are monitored by computer algorithms.
  • Self-learning: the system is taught what is "OK" and "not OK" through pixel-level recognition. Advanced systems even include sophisticated gauging and measurement algorithms.
  • Reveal the invisible: not everything can be seen with the naked eye. Vision can bring this kind of information to the surface with a good/no-good assessment.
  • Detect errors at the source rather than downstream: the further a bad part travels down the assembly line, the more it costs to remove. Vision can detect errors at the source and catch flaws in base materials or malfunctioning components early.
  • Position adjustment: unlike classic 3D sensor technology, vision systems can cope with imprecise part positioning and automatically adjust when parts have shifted.

Trade-Offs

  • Overkill when basic hand gesture tracking is enough
  • A full vision system requires more expertise to get up and running

With all this in mind, it is important to understand the application in order to decide which technology is a good fit.

Ansomatic is a software platform that accommodates the different sensor and camera technologies.

Want to learn more? Plan an informal demo.