OCR (optical character recognition) and OMR (optical mark recognition) are specialized systems that convert characters and marks printed on paper into a format a computer can easily read and process. Both technologies combine hardware and software: a scanner captures an image of the document, and software recognizes and deciphers its content into electronic form.

The first basic OMR sensor was created by IBM in the 1930s. Everett Franklin Lindquist patented a successful OMR scanner in 1962, and IBM developed and commercialized an OMR test-scoring machine that same year. In 1972, Scantron Corporation began making and selling OMR scanning equipment to schools to standardize the testing process.

OMR technology is used for a variety of applications, including the processing of educational tests, ballots, questionnaires, reports and order sheets or forms. Its most common application is the pencil bubble test, in which students answer by darkening a bubble on a preprinted sheet. The OMR software then scans the sheet, reads the marks and grades it automatically.

OCR readers can capture large volumes of data in digitized form, which can then be manipulated in a word processor. OCR systems generate fewer errors than manual data entry, saving the time and cost that would otherwise be spent on error-correction workstations. An OCR system can read about 420 characters per second with an accuracy rate of around 98 percent, although accuracy ultimately depends on the readability of the original source. On the downside, OCR systems are expensive, costly to maintain and still require some manual intervention.
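
To make the bubble-test grading step more concrete, here is a minimal sketch of what OMR software does once the scanner has measured how dark each bubble is. The darkness values, threshold and answer key are invented for illustration and are not taken from any real scanner or test.

```python
# Illustrative answer key and scanner readings (all values are made up).
ANSWER_KEY = {1: "B", 2: "D", 3: "A"}   # correct choice per question
DARKNESS_THRESHOLD = 0.6                # fraction of the bubble that must be filled in

# Simulated scanner output: per question, how dark each bubble appears.
scanned_bubbles = {
    1: {"A": 0.05, "B": 0.92, "C": 0.04, "D": 0.03},
    2: {"A": 0.10, "B": 0.08, "C": 0.07, "D": 0.85},
    3: {"A": 0.40, "B": 0.55, "C": 0.06, "D": 0.02},  # too faint -> counted as blank
}

def grade(bubbles, key, threshold=DARKNESS_THRESHOLD):
    """Count a question as correct only if exactly one bubble is dark enough
    and that bubble matches the answer key."""
    score = 0
    for question, marks in bubbles.items():
        filled = [choice for choice, darkness in marks.items() if darkness >= threshold]
        if len(filled) == 1 and filled[0] == key.get(question):
            score += 1
    return score

print(f"Score: {grade(scanned_bubbles, ANSWER_KEY)} / {len(ANSWER_KEY)}")
```

In this sketch, question 3 is not credited because no bubble crosses the darkness threshold, which mirrors how real OMR graders reject faint or ambiguous marks.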

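As a rough illustration of the OCR workflow described above, the sketch below passes a scanned page to an OCR engine and saves the recognized text so it can be edited in a word processor. It assumes the open-source Tesseract engine together with the pytesseract and Pillow Python packages are installed; the file names are placeholders, not files referenced in this article.

```python
from PIL import Image
import pytesseract

# Open the scanned source document (placeholder file name).
image = Image.open("scanned_page.png")

# Recognize the characters on the page and return them as plain text.
text = pytesseract.image_to_string(image)

# Save the recognized text so it can be edited, searched or imported elsewhere.
with open("scanned_page.txt", "w", encoding="utf-8") as out:
    out.write(text)
```

As the article notes, the quality of the output depends heavily on how readable the original scan is.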