Machine Vision and Deep Learning Resources

Mariner builds and deploys production-grade deep-learning systems for manufacturing applications. We believe that machine learning and Deep Learning are poised to transform the manufacturing industry, delivering better quality, higher uptime, lower operating costs, and reduced environmental impact – all promises of Industry 4.0.

To aid this industry transformation, we’re providing this collection of free resources to help the manufacturing sector better understand machine vision, Deep Learning, and more. Please don’t hesitate to contact us with any questions.

eBooks

eBook
What’s Inside

This eBook provides the fundamentals you’ll need to understand both machine vision and Deep Learning, as well as how we use those elements in our Spyglass Visual Inspection system.

eBook
What’s Inside

We’re Microsoft’s 2020 Global Partner of the Year for IoT, and in this Microsoft-produced eBook they tell you why. Download it for a quick read on defects in manufacturing, how SVI solves them with quick time-to-value, and some real-world numbers on the savings that our manufacturing customers have realized.

Fact Sheets

Fact Sheet
What’s Inside

Microsoft and Nvidia love SVI so much they produced this fact sheet for us on how it works on the painted, glossy, and textured surfaces that other machine vision systems can’t handle.

Infographic
What’s Inside

Want to pick up some easily-learned facts about machine vision, Deep Learning, and Spyglass Visual Inspection? Microsoft created this infographic exactly for that purpose. Get it right here and check out how model training, real-time AI detection and classification of defects, and monitoring / root-cause analysis all work together in Spyglass Visual Inspection to truly bring transformation to the manufacturing process.

 

Webinars and Videos

Webinar
What’s Inside

Learn About:

  • Why high-mix, low-volume (HMLV) manufacturers struggle with proper assembly
  • How Spyglass Assembly Verification eliminates those struggles
  • Benefits of Spyglass Assembly Verification
Webinar
What’s Inside

Learn About:

  • The state of IoT in the automotive industry
  • Shortcomings of traditional machine vision systems
  • Using Deep Learning AI to better identify defects and eliminate false positives
  • How customers are benefiting from Spyglass Visual Inspection
Webinar
What’s Inside

COVID-19 has accelerated the adoption of digital technology and transformed businesses forever. Hear four expert panelists discuss strategies and business practices for staying competitive in this new business and economic environment.

Webinar
What’s Inside

Learn About:

  • The cost of quality
  • Why AI has so far disappointed many manufacturers
  • How Deep Learning can achieve upwards of 30X improvements over traditional machine vision systems
Webinar
What’s Inside

Learn About:

  • Cloud limitations
  • The latest on edge and hybrid edge/cloud setups for Industry 4.0
  • How Intel and Microsoft technologies can help make it all come together
Webinar
What’s Inside

Learn About: 

  • Why the Cloud has limitations for AI and Deep Learning on the factory floor
  • Why on-premises computing is fashionable again; now they call it “edge computing”
  • Why factory-floor AI and Deep Learning need a hybrid edge/cloud architecture to truly deliver Industry 4.0 Smart Factory capabilities
Webinar
What’s Inside

Sponsored by Conexus as part of their Emerging Technology Showcase series, this webinar highlights how Mariner has applied Deep Learning to the problem of false rejects in machine vision applications.

Webinar
What’s Inside

Learn About:

  • Real-life success stories involving Mariner’s GPU-powered Deep Learning software for quality inspection.
  • How SVI can dramatically improve visual inspection accuracy, deliver a 30X reduction in false rejections, and make considerable process improvements that drive significant business value.
Video
What’s Inside

Want a fast, informative primer? This video will get you on track.

White Papers

White Paper
What’s Inside


Frequently Asked Questions

Questions and answers on machine vision, AI, defect detection, Spyglass Visual Inspection, and more

Frequently Asked Questions – Spyglass Visual Inspection

Q
Does Spyglass Visual Inspection work with existing machine vision systems?
A

Yes. Spyglass Visual Inspection is camera-agnostic, meaning it will work with any camera. The only caveat is that the defects you wish to detect MUST be visible in the images from your camera(s) and identifiable to your Quality experts when they look at those images -- but other than that, SVI's AI model won't be bothered by the make, model, or type of camera.

Q
How big does a defect need to be before Spyglass Visual Inspection can detect it?
A

It's not really the size of the defect in the real world that's important to SVI, but rather the size of the defect in the IMAGE that SVI is looking at. That is, we need a defect to occupy a minimum of around 7 pixels of an image for SVI to be able to work with it -- but what constitutes 7 pixels will vary widely based on the camera and camera setup. For example, some camera systems might struggle to take an image where a 1mm defect occupies 7 pixels, whereas a microscope camera might be able to take images wherein a 1 micron defect would take up far more than 7 pixels. If you're interested in more info on this, we have a great explainer post on why defect size doesn't matter.
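As a rough, back-of-the-envelope illustration of that arithmetic, the sketch below estimates how many pixels a defect spans given a camera's resolution and field of view. The resolution and field-of-view numbers are hypothetical examples, not a spec for any particular installation -- your camera integrator can give you the real numbers for your line.

```python
# Back-of-the-envelope estimate of how many pixels a defect will span in an image.
# The camera resolution and field-of-view numbers below are hypothetical examples,
# not specifications for any particular installation.

def defect_span_pixels(defect_size_mm: float, sensor_pixels_wide: int, field_of_view_mm: float) -> float:
    """Approximate width of a defect, in image pixels."""
    pixels_per_mm = sensor_pixels_wide / field_of_view_mm
    return defect_size_mm * pixels_per_mm

# Example: a 5 MP camera (2448 px wide) imaging a 400 mm wide section of product.
print(round(defect_span_pixels(1.0, 2448, 400.0), 1))   # ~6.1 px -- just under the ~7 px guideline
# Narrowing the field of view to 300 mm raises the same 1 mm defect to ~8.2 px.
print(round(defect_span_pixels(1.0, 2448, 300.0), 1))
```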

Q
Why does Spyglass Visual Inspection use Deep Learning?
A

Deep Learning is currently the best technology that the industry has to solve what we like to call "fuzzy" problems.

Now, traditional machine vision systems do indeed work very well on most discrete problems -- that is, if what you're asking your system to decide is whether there's a hole present or NOT a hole present, or whether a piece is 10mm long or NOT 10mm long, you probably won't need Deep Learning. 

But what about cases where defects look very much like non-defects, such as a piece of fabric that might have a stain on it or might just have a piece of lint sitting on it? Traditional machine vision systems will struggle with those kinds of problems, but with Deep Learning we're able to train the AI to tell the difference between the two just like you could train a human to tell the difference.

That's the power of Deep Learning, and that's why it's the heart of SVI -- it dramatically outperforms existing vision systems on fuzzy problems such as (but not limited to, of course) our stain vs. lint example.
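For readers who want to see the general idea in code, here is a minimal, hypothetical sketch of training a small image classifier on a "fuzzy" two-class problem like stain vs. lint. It is not SVI's actual model or pipeline; the tiny network, the labels, and the randomly generated stand-in images are all illustrative assumptions.

```python
# A minimal, hypothetical sketch of training a small classifier on a "fuzzy"
# two-class problem (e.g. stain vs. lint). This is NOT SVI's actual model or
# pipeline -- the network, labels, and random stand-in images are illustrative only.
import torch
import torch.nn as nn

class TinyDefectClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 2))

    def forward(self, x):
        return self.head(self.features(x))

# Stand-in data: 64x64 RGB crops, label 0 = lint (not a defect), 1 = stain (defect).
# A real project would use many labelled images per class (see the FAQ below).
images = torch.randn(64, 3, 64, 64)
labels = torch.randint(0, 2, (64,))

model = TinyDefectClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                      # real training runs far longer
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```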

 

Q
What equipment will I need to purchase for Spyglass Visual Inspection?
A

Most likely, none. SVI comes installed on a server box that already has the processors, GPUs, and other hardware and software it needs to do its job. You will, however, need to have a camera / machine vision system already in place -- without that, our data scientists will have no way to get the images of your defects that they will need in order to train SVI's AI for you.

Q
How many defect images do I need to get started?
A

We need at least 100 images of each type of defect in order to train the AI properly. Any fewer than that means the AI will not have enough information to appropriately generalize its understanding of what your defects look like. Also note that although 80% or so of your images will be used to train the AI, the other 20% are held back from training so that once the AI model is built we can test it on images it's never seen before.
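As a rough sketch of that 80/20 split (the file names and the helper function here are illustrative placeholders, not Mariner's actual training pipeline):

```python
# Illustrative 80/20 split of labelled defect images into a training set and a
# hold-out test set. File names are placeholders; this is not Mariner's pipeline.
import random

def split_images(image_paths, train_fraction=0.8, seed=42):
    """Shuffle the labelled images and hold back ~20% for testing the finished model."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_fraction)
    return paths[:cut], paths[cut:]

# Hypothetical example: 100 images of one defect class.
stain_images = [f"stain_{i:03d}.png" for i in range(1, 101)]
train_set, test_set = split_images(stain_images)
print(len(train_set), "training images,", len(test_set), "held-out test images")   # 80 / 20
```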

 

Q
Can I just use the onboard software that comes with the camera?
A

It depends on your product, your production lines, and your needs.

The onboard processing is typically how systems are architected when Deep Learning AI is not required. Embedded architectures are generally faster and don’t require moving images off the camera, which is why they are used in high-volume, low-consequence manufacturing situations like checking whether a "10% Off" label got onto a package of snacks. Systems like that will solve many manufacturers’ problems without Deep Learning at all, as long as the decisions are binary (“Is this particular feature here or not?”), because traditional programming is great at solving binary problems.

The value prop of SVI, though, is that because of its Deep Learning and its architecture, it can handle much harder, non-binary problems than those that can be solved onboard a camera. For example, Sage Automotive, a global automotive fabric manufacturer, had a traditional machine vision system that could only say that there was a blob on the fabric; it had no idea whether that blob was a stain (a defect) or a piece of lint (not a defect).

Not every manufacturing scenario will have that kind of non-binary use case – but when one does, it’s usually a multi-million-dollar problem, because the manufacturer can’t use automation to solve it and thus has to slow down its lines and deploy an army of human inspectors to physically look at the product.

Q
How many defects can SVI detect in one image - max?
A

The AI model can be trained to recognize as many defect classes as you have images for. We have one customer using a model that we trained to identify 12 different defect classes on one product.

Q
How many product profiles can I have?
A

There’s no real limit. If the defects appear different across different products, then we’d have to train a different AI model for each product, and in SVI you would then select the appropriate model for the product you were running at that time. If the defects appear the same across different products (for example, a scratch might look the same across all of your products), then you wouldn’t need multiple models – one model could be used across those products.

Q
Can I use multiple photos of each product to keep the level of detail higher in each photo?
A

The camera integrator will work with you to get a system in place that’s right for your production lines. Sometimes that means multiple cameras, which SVI can handle, but oftentimes with the right camera in place a manufacturer will only need one camera.

Q
I only have photos of good examples of my products. Can I use SVI and then categorize the defects that are found to train it along the way?
A

Images of defects are needed to do the initial AI model training. The typical process is that once the camera system is in place in production you will use it to accumulate images that show defects until you have acquired enough for us to do the initial model training.

Q
How many photos do I need to provide to show what a product should look like?
A

The AI works by being trained on specific defects, and those are what it looks for. It does that in order to avoid false identification of things that are not actual defects, which is what often happens when one tries to train AI to do anomaly detection by having it look for things that are not like a “perfect” reference image.

As an example, suppose you train an AI on what a perfect watch face looks like. If the AI then sees a crack, it will know that the crack doesn’t match the reference image and will appropriately call it a defect. But then suppose a watch face goes by that just has a hair on it, or some dirt – because those anomalies don’t match the perfect reference image, the AI will incorrectly call those defective. But if the AI is instead trained to look for cracks and scratches, it will still properly identify the cracked or scratched watch face as defective, while allowing the watch face with the hair or dirt on it to properly go through as non-defective.
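To make the distinction concrete, here is a small, hypothetical sketch contrasting the two approaches: flag anything that deviates from a "golden" reference image, versus flag only the defect classes a model was actually trained on. The reference image, threshold, and class names are illustrative placeholders, not SVI's implementation.

```python
# Illustrative contrast between reference-image anomaly detection and class-based
# defect detection. The reference image, threshold, and class names are placeholders.
import numpy as np

REFERENCE = np.zeros((64, 64))        # stand-in for a "perfect" watch-face image
DEVIATION_THRESHOLD = 0.1             # arbitrary illustrative threshold

def anomaly_approach(image):
    """Flags anything that deviates from the reference -- a harmless hair or speck
    of dust fails this test just like a real crack does."""
    return float(np.mean(np.abs(image - REFERENCE))) > DEVIATION_THRESHOLD

TRAINED_DEFECT_CLASSES = {"scratch", "crack"}   # classes the model was trained on

def class_based_approach(detected_classes):
    """Flags the part only if a trained defect class was detected; `detected_classes`
    stands in for the output of a trained Deep Learning model."""
    return bool(detected_classes & TRAINED_DEFECT_CLASSES)

# A watch face with a hair on it: large deviation from the reference, but no
# trained defect class detected.
hair_image = REFERENCE + 0.2
print(anomaly_approach(hair_image))      # True  -> falsely rejected
print(class_based_approach(set()))       # False -> correctly passed
```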

Q
While authenticating my products, some of the parts in the photo might not be oriented the right way. Does that matter for the detection of the part?
A

If you were to try to build an AI model using deviation from a “perfect” image, then parts that were oriented differently would indeed appear as defective just because they didn’t match the reference. However, if the AI is instead trained on specific defect classes, then the orientation of the part will likely not matter at all – a scratch will always look like a scratch.

Frequently Asked Questions – General

Q
Mariner seems like a great company. Are you currently hiring?
A

Mariner is indeed a great company -- thanks for noticing! For more information about what positions we have open or might be adding, please use our Contact Form.

Q
Do you use Azure, AWS, and/or Google for the Cloud-based portion of your services?
A

All of our solutions are standardized on and built in Azure. We standardized on Azure because it offers everything our solutions need for the portions that run in the Cloud, while also being simple and secure. If your organization does not currently have its own Azure tenant, no worries -- we can easily run your solution in ours.

Powerful Tools for Manufacturers to Reduce Their Cost of Quality

Mariner uses Deep Learning AI to put powerful, simple, and effective detection technology in your hands.

Spyglass Visual Inspection
Eliminates false rejects and pseudo-defects, making machine vision inspection systems perform the way manufacturers expected when they purchased them.
Spyglass Assembly Verification

Ingests BoMs, engineering drawings, and parts specs to generate an understanding of what a complicated assembly should look like – and then watches the article on the line to make sure that assembly is correct at each station.

GET ON THE PATH TO REDUCING YOUR COST OF QUALITY TODAY