Targeted Patent:

Patent: US6641053B1
Filed: 2002-10-16
Issued: 2003-11-04
Patent Holder: (Original Assignee) Xerox Corp     (Current Assignee) BASSFIELD IP LLC
Inventor(s): Jeff Breidenbach, David L. Hecht

Title: Foreground/background document processing with dataglyphs

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US6463416B1
Filed: 1996-07-15
Issued: 2002-10-08
Patent Holder: (Original Assignee) Intelli Check Inc     (Current Assignee) Intellicheck Mobilisa Inc
Inventor(s): Kevin M. Messina

Title: Authentication system for identification documents

[FEATURE ID: 1] method, substrate | device, mechanism, system, document, machine, computer, technique | [FEATURE ID: 1] programmable apparatus, apparatus, means, programs, method
[TRANSITIVE ID: 2] producing, generating, compositing | providing, representing, receiving, using, displaying, processing, combining | [TRANSITIVE ID: 2] authenticating, comprising, reading
[FEATURE ID: 3] composite machine, readable document, halftone cells, readable image, glyphs, human invisible print materials, readable character | characters, visible, video, machine, data, image, patterns | [FEATURE ID: 3] entity, human recognizable information, machine recognizable, information, error messages, text, graphics, National Television Standards
[FEATURE ID: 4] human | information, data, document, indicia, video, text, image | [FEATURE ID: 4] jurisdictional segments, alarm messages, identification document
[TRANSITIVE ID: 5] comprising | including, comprises, having, of | [TRANSITIVE ID: 5] license format matches
[FEATURE ID: 6] background image, graphical image, spatial pointer, first document, composite document | template, code, watermark, signature, substrate, reference, character | [FEATURE ID: 6] document, reference license format, preselected criterion, message
[TRANSITIVE ID: 7] said | the, this, such, which | [TRANSITIVE ID: 7] said
[FEATURE ID: 8] glyphtone cells, grayscale image data values, adjacent visible halftone cells, supplementary information, invisible print materials, additional computer data, color parameters | information, data, indicia, pixels, symbols, content, text | [FEATURE ID: 8] identification information, reference jurisdictional segments, identification parameter
[FEATURE ID: 9] distinguishable patterns | dimensions, sizes, characteristics, attributes, states | [FEATURE ID: 9] values
[FEATURE ID: 10] second image such, portion, digital encoding, location identifier, point, version | representation, feature, segment, value, section, pattern, location | [FEATURE ID: 10] license format
[FEATURE ID: 11] second image, area | image, address, attribute, information, item, article, event | [FEATURE ID: 11] identification criteria
[FEATURE ID: 12] claim | claimed, item, clause, paragraph, embodiment, step, the claim | [FEATURE ID: 12] claim
[TRANSITIVE ID: 13] comprises | defines, represents, indicates, utilizes, uses, contains, incorporates | [TRANSITIVE ID: 13] embodies, includes
[FEATURE ID: 14] font identifier | command, signature, code | [FEATURE ID: 14] verification signal
[FEATURE ID: 15] second document | comparison, memory, control | [FEATURE ID: 15] processing
[FEATURE ID: 16] memory | medium, scanner, reader | [FEATURE ID: 16] means
1 . A method [FEATURE ID: 1]

of producing [TRANSITIVE ID: 2]

a composite machine [FEATURE ID: 3]

- readable and human [FEATURE ID: 4]

- readable document [FEATURE ID: 3]

comprising [TRANSITIVE ID: 5]

: generating [TRANSITIVE ID: 2]

a background image [FEATURE ID: 6]

on a substrate [FEATURE ID: 1]

, said [TRANSITIVE ID: 7]

background image comprising coded glyphtone cells [FEATURE ID: 8]

based on grayscale image data values [FEATURE ID: 8]

, each of said halftone cells [FEATURE ID: 3]

comprising one of at least two distinguishable patterns [FEATURE ID: 9]

; compositing [TRANSITIVE ID: 2]

the background image with a second image such [FEATURE ID: 10]

that two or more adjacent visible halftone cells [FEATURE ID: 8]

may be decoded and the second image [FEATURE ID: 11]

may be viewed . 2 . The method of claim [FEATURE ID: 12]

1 , wherein the second image comprises [TRANSITIVE ID: 13]

a human - readable image [FEATURE ID: 3]

. 3 . The method of claim 1 , wherein the second image comprises a graphical image [FEATURE ID: 6]

. 4 . The method of claim 1 , wherein the second image is spatially registered with the background image . 5 . The method of claim 1 , wherein at least a portion [FEATURE ID: 10]

of the background image is printed using glyphs [FEATURE ID: 3]

. 6 . The method of claim 1 , wherein at least a portion of the background image is printed using human invisible print materials [FEATURE ID: 3]

. 7 . The method of claim 1 , wherein the background image comprises a digital encoding [FEATURE ID: 10]

of the second image . 8 . The method of claim 1 , wherein the background image includes at least one spatial pointer [FEATURE ID: 6]

. 9 . The method of claim 8 , wherein the spatial pointer includes a location identifier [FEATURE ID: 10]

and supplementary information [FEATURE ID: 8]

. 10 . The method of claim 9 , wherein the location identifier refers to a point [FEATURE ID: 10]

on the substrate . 11 . The method of claim 9 , wherein the location identifier refers to an area [FEATURE ID: 11]

on the substrate . 12 . The method of claim 11 wherein the area comprises human - invisible print materials [FEATURE ID: 8]

. 13 . The method of claim 9 , wherein the supplementary information defines a human - readable character [FEATURE ID: 3]

. 14 . The method of claim 13 , wherein the supplementary information includes a font identifier [FEATURE ID: 14]

. 15 . The method of claim 9 , wherein the supplementary information is a pointer to additional computer data [FEATURE ID: 8]

. 16 . The method of claim 9 , wherein the supplementary information defines one or more color parameters [FEATURE ID: 8]

. 17 . A method for comparing a first document [FEATURE ID: 6]

to a second document [FEATURE ID: 15]

, comprising : inputting a composite document [FEATURE ID: 6]

into a memory [FEATURE ID: 16]

, said composite document comprised of a first image overlaying a second image ; separating the first image from the second image ; decoding the second image ; and comparing the first image to a decoded version [FEATURE ID: 10]
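The glyphtone mechanism of targeted claim 1 can be illustrated with a minimal sketch. This is a hypothetical rendering, not Xerox's actual DataGlyph implementation: each halftone cell takes one of two distinguishable patterns ("/" vs "\"), so the chosen pattern carries one machine-readable bit while the cell's ink density tracks the grayscale image data value at that position. All function and field names here are illustrative.

```python
# Hypothetical glyphtone sketch: pattern encodes a bit, density encodes gray.

def encode_glyphtones(gray_values, bits):
    """Pair each grayscale value (0-255) with one data bit.

    Each cell records the pattern selected by the bit (one of two
    distinguishable patterns) and an ink density derived from the gray value.
    """
    cells = []
    for g, b in zip(gray_values, bits):
        pattern = "/" if b == 0 else "\\"   # two distinguishable patterns
        density = 1.0 - g / 255.0           # darker gray -> denser cell
        cells.append({"pattern": pattern, "density": round(density, 3)})
    return cells

def decode_glyphtones(cells):
    """Recover the embedded bits from the cell patterns alone."""
    return [0 if c["pattern"] == "/" else 1 for c in cells]

cells = encode_glyphtones([0, 128, 255], [1, 0, 1])
assert decode_glyphtones(cells) == [1, 0, 1]   # bits survive independently of gray
```

Because decoding reads only the pattern, the background image remains machine-readable while the densities render the human-viewable grayscale, which is the claim's composite of machine- and human-readable content.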

1 . A programmable apparatus [FEATURE ID: 1]

for authenticating [TRANSITIVE ID: 2]

a document [FEATURE ID: 6]

which embodies [TRANSITIVE ID: 13]

identification information [FEATURE ID: 8]

for an identified entity [FEATURE ID: 3]

comprising [TRANSITIVE ID: 2]

both human recognizable information [FEATURE ID: 3]

and machine recognizable [FEATURE ID: 3]

coded information [FEATURE ID: 3]

, said [TRANSITIVE ID: 7]

apparatus [FEATURE ID: 1]

comprising : means [TRANSITIVE ID: 16]

for reading [TRANSITIVE ID: 2]

the information of said document into said programmable apparatus ; means for determining whether said document includes [TRANSITIVE ID: 13]

a license format [FEATURE ID: 10]

corresponding to a reference license format [FEATURE ID: 6]

based on a comparison between said read information and said reference license format ; means for parsing said read information into jurisdictional segments [FEATURE ID: 4]

if said license format matches [FEATURE ID: 5]

said reference license format , wherein reference jurisdictional segments [FEATURE ID: 8]

as included in said reference license format each have predetermined values [FEATURE ID: 9]

; processing [FEATURE ID: 15]

means directing the operation of said programmable apparatus for comparing said read information to determine whether said jurisdictional segments match said predetermined values ; said processing means further directing the operation of said programmable apparatus for determining whether a selected identification parameter [FEATURE ID: 8]

for said identified entity corresponds to a preselected criterion [FEATURE ID: 6]

and generating at least a verification signal [FEATURE ID: 14]

if said selected identification parameter satisfies said preselected criterion ; and means for indicating a verification signal . 2 . The programmable apparatus of claim [FEATURE ID: 12]

1 wherein said means [FEATURE ID: 1]

for indicating a verification signal is manifested as a display means selected from the group consisting of : means for displaying read information from a license format , means for displaying alarm messages [FEATURE ID: 4]

, means for displaying error messages [FEATURE ID: 3]

, and means for displaying a “ yes ” or “ no ” message [FEATURE ID: 6]

. 3 . The programmable apparatus of claim 1 , wherein said means for indicating a verification signal is capable of providing human recognizable information in text [FEATURE ID: 3]

and graphics [FEATURE ID: 3]

, said text and graphics being capable of utilizing programs [FEATURE ID: 1]

including the Super Video Graphics Array , and National Television Standards [FEATURE ID: 3]

. 4 . A method [FEATURE ID: 1]

for authentication of an identification criteria [FEATURE ID: 11]

in an identification document [FEATURE ID: 4]








Targeted Patent:

Patent: US6641053B1
Filed: 2002-10-16
Issued: 2003-11-04
Patent Holder: (Original Assignee) Xerox Corp     (Current Assignee) BASSFIELD IP LLC
Inventor(s): Jeff Breidenbach, David L. Hecht

Title: Foreground/background document processing with dataglyphs

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US6459803B1
Filed: 1992-07-31
Issued: 2002-10-01
Patent Holder: (Original Assignee) Digimarc Corp     (Current Assignee) Digimarc Corp ; Corbis Corp
Inventor(s): Robert D. Powell, Mark J. Nitzberg

Title: Method for encoding auxiliary data within a source signal

[FEATURE ID: 1] method, substrate, graphical image, composite document, version | form, digital, system, technique, procedure, page, process | [FEATURE ID: 1] digital document, mathematical relationship, method
[TRANSITIVE ID: 2] producing, generating, compositing | displaying, providing, representing, defining, forming, indicating, of | [TRANSITIVE ID: 2] containing, having
[FEATURE ID: 3] composite machine, readable image, digital encoding, point, font identifier, second document | pattern, symbol, background, portion, bitmap, template, border | [FEATURE ID: 3] digital image
[FEATURE ID: 4] readable, based, viewed | generated, formed, rendered, printed, produced, determined, made | [FEATURE ID: 4] defined, selected
[FEATURE ID: 5] human | information, image, location, array, ink, indicia, document | [FEATURE ID: 5] indicia image, indicia location
[TRANSITIVE ID: 6] comprising | including, containing, having, of, wherein, with, by | [TRANSITIVE ID: 6] comprising, consisting
[FEATURE ID: 7] background image | background, substrate, template, target, artwork, article, image | [FEATURE ID: 7] original document, original image, predetermined difference range
[FEATURE ID: 8] glyphtone cells, distinguishable patterns, adjacent visible halftone cells, human invisible print materials, supplementary information, invisible print materials, additional computer data, color parameters | data, colors, values, patterns, samples, regions, bits | [FEATURE ID: 8] adjacent pixels, pixel, pixels, respective pixels, signature points, locations, pixel values
[FEATURE ID: 9] grayscale image data values, portion | color, data, magnitude, pixel, brightness, feature, first | [FEATURE ID: 9] pixel value, luminance, first pixel value, average pixel value
[FEATURE ID: 10] halftone cells | features, components, sets, elements | [FEATURE ID: 10] steps
[FEATURE ID: 11] second image such, first image | first, image, second, selected, pixel, corresponding, such | [FEATURE ID: 11] predetermined, second pixel value, first pixel
[FEATURE ID: 12] second image, readable character | object, output, article, identifier, overlay, text, interface | [FEATURE ID: 12] indicia, output image, result
[FEATURE ID: 13] claim | feature, need, paragraph, claim of, item, figure, clause | [FEATURE ID: 13] claim, location
[FEATURE ID: 14] spatial pointer | image, overlay, area | [FEATURE ID: 14] output array
[FEATURE ID: 15] location identifier | value, reference, distance | [FEATURE ID: 15] relation
[FEATURE ID: 16] area | image, extent, interval, order, orientation, average, element | [FEATURE ID: 16] array, amount
[FEATURE ID: 17] first document | first, second, third | [FEATURE ID: 17] second adjacent pixel
[FEATURE ID: 18] step | preliminary step, further step, preceding step, added step, method step, sub step, optional step | [FEATURE ID: 18] step
1 . A method [FEATURE ID: 1]

of producing [TRANSITIVE ID: 2]

a composite machine [FEATURE ID: 3]

- readable [FEATURE ID: 4]

and human [FEATURE ID: 5]

- readable document comprising [TRANSITIVE ID: 6]

: generating [TRANSITIVE ID: 2]

a background image [FEATURE ID: 7]

on a substrate [FEATURE ID: 1]

, said background image comprising coded glyphtone cells [FEATURE ID: 8]

based [TRANSITIVE ID: 4]

on grayscale image data values [FEATURE ID: 9]

, each of said halftone cells [FEATURE ID: 10]

comprising one of at least two distinguishable patterns [FEATURE ID: 8]

; compositing [TRANSITIVE ID: 2]

the background image with a second image such [FEATURE ID: 11]

that two or more adjacent visible halftone cells [FEATURE ID: 8]

may be decoded and the second image [FEATURE ID: 12]

may be viewed [TRANSITIVE ID: 4]

. 2 . The method of claim [FEATURE ID: 13]

1 , wherein the second image comprises a human - readable image [FEATURE ID: 3]

. 3 . The method of claim 1 , wherein the second image comprises a graphical image [FEATURE ID: 1]

. 4 . The method of claim 1 , wherein the second image is spatially registered with the background image . 5 . The method of claim 1 , wherein at least a portion [FEATURE ID: 9]

of the background image is printed using glyphs . 6 . The method of claim 1 , wherein at least a portion of the background image is printed using human invisible print materials [FEATURE ID: 8]

. 7 . The method of claim 1 , wherein the background image comprises a digital encoding [FEATURE ID: 3]

of the second image . 8 . The method of claim 1 , wherein the background image includes at least one spatial pointer [FEATURE ID: 14]

. 9 . The method of claim 8 , wherein the spatial pointer includes a location identifier [FEATURE ID: 15]

and supplementary information [FEATURE ID: 8]

. 10 . The method of claim 9 , wherein the location identifier refers to a point [FEATURE ID: 3]

on the substrate . 11 . The method of claim 9 , wherein the location identifier refers to an area [FEATURE ID: 16]

on the substrate . 12 . The method of claim 11 wherein the area comprises human - invisible print materials [FEATURE ID: 8]

. 13 . The method of claim 9 , wherein the supplementary information defines a human - readable character [FEATURE ID: 12]

. 14 . The method of claim 13 , wherein the supplementary information includes a font identifier [FEATURE ID: 3]

. 15 . The method of claim 9 , wherein the supplementary information is a pointer to additional computer data [FEATURE ID: 8]

. 16 . The method of claim 9 , wherein the supplementary information defines one or more color parameters [FEATURE ID: 8]

. 17 . A method for comparing a first document [FEATURE ID: 17]

to a second document [FEATURE ID: 3]

, comprising : inputting a composite document [FEATURE ID: 1]

into a memory , said composite document comprised of a first image [FEATURE ID: 11]

overlaying a second image ; separating the first image from the second image ; decoding the second image ; and comparing the first image to a decoded version [FEATURE ID: 1]

of the second image . 18 . The method of claim 17 , further comprising the step [FEATURE ID: 18]
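Targeted claim 17's document-comparison pipeline (separate, decode, compare) can be rendered as a short sketch. The composite document is modeled here as per-pixel pairs of a first (overlay) image and an encoded second image; the toy "decoding" simply inverts values, since the claim does not fix a particular code, and the similarity score is an invented stand-in for the comparison step.

```python
# Hypothetical claim-17 pipeline: separate, decode, then compare.

def separate(composite):
    """Split the composite into the first (overlay) and second (encoded) images."""
    first = [fg for fg, _ in composite]
    second = [bg for _, bg in composite]
    return first, second

def decode(second):
    """Toy decoder: inverse of an assumed value-inversion encoding."""
    return [255 - v for v in second]

def compare(first, decoded):
    """Fraction of positions where the first image matches the decoded version."""
    same = sum(1 for a, b in zip(first, decoded) if a == b)
    return same / len(first)

composite = [(10, 245), (20, 235), (30, 200)]
first, second = separate(composite)
score = compare(first, decode(second))   # 2 of 3 positions agree
```

A full implementation would operate on scanned page images held in memory, but the three-stage structure is the same.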

1 . A digital document [FEATURE ID: 1]

containing [TRANSITIVE ID: 2]

an indicia [FEATURE ID: 12]

applied to an original document [FEATURE ID: 7]

, comprising [TRANSITIVE ID: 6]

: a digital image [FEATURE ID: 3]

defined [TRANSITIVE ID: 4]

by an array [FEATURE ID: 16]

of adjacent pixels [FEATURE ID: 8]

, each pixel [FEATURE ID: 8]

having [TRANSITIVE ID: 2]

a pixel value [FEATURE ID: 9]

selected [TRANSITIVE ID: 4]

from the group consisting [TRANSITIVE ID: 6]

of luminance [FEATURE ID: 9]

and a color value , a first pixel value [FEATURE ID: 9]

of a first predetermined [TRANSITIVE ID: 11]

pixel having a relation [FEATURE ID: 15]

to a second pixel value [FEATURE ID: 11]

of a second adjacent pixel [FEATURE ID: 17]

, wherein said first pixel [FEATURE ID: 11]

and said second adjacent pixel being positioned to define said indicia image [FEATURE ID: 5]

, and said relation is a mathematical relationship [FEATURE ID: 1]

used in applying said indicia image to the original image [FEATURE ID: 7]

. 2 . A digital document in accordance with claim [FEATURE ID: 13]

1 wherein said pixel value consists of luminance . 3 . A digital document in accordance with claim 1 wherein said pixel value consists of a color value . 4 . A method [FEATURE ID: 1]

of detecting a location [FEATURE ID: 13]

for an indicia in a digital image formed of an array of pixels [FEATURE ID: 8]

having a pixel value , comprising the steps [FEATURE ID: 10]

of : determining an amount [FEATURE ID: 16]

of the pixel value of respective pixels [FEATURE ID: 8]

in said array of pixels , said pixel value being selected from the group consisting of luminance and a color value ; calculating a difference in average pixel value [FEATURE ID: 9]

between pixels corresponding to signature points [FEATURE ID: 8]

in said array of pixels ; and comparing the difference in average pixel value with a predetermined difference range [FEATURE ID: 7]

to determine those pixels corresponding to signature points . 5 . A method in accordance with claim 4 wherein said pixel value consists of luminance . 6 . A method in accordance with claim 4 wherein said pixel value consists of a color value . 7 . A method in accordance with claim 4 wherein said selected pixels are adjacent pixels . 8 . A method in accordance with claim 7 wherein said pixel value consists of luminance . 9 . A method in accordance with claim 7 wherein said pixel value consists of a color value . 10 . The method of claim 4 , further comprising the step [FEATURE ID: 18]

of : generating an output image [FEATURE ID: 12]

represented by an output array [FEATURE ID: 14]

of pixels , said output array including altered pixels at locations [FEATURE ID: 8]

for which the comparing step yielded a difference that was within the predetermined difference range . 11 . A method for identifying an indicia location [FEATURE ID: 5]

in a digital image formed of an array of pixels having a pixel value , comprising the steps of : calculating a difference of pixel values [FEATURE ID: 8]

for a selected pixel and a plurality of pixels surrounding the selected pixel ; comparing the difference of pixel values with a predetermined difference range ; and categorizing the selected pixel as one of an indicia location and not an indicia location based on a result [FEATURE ID: 12]
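The detection loop of US6459803 claims 4 and 11 can be sketched directly: determine pixel values, take the difference between a selected pixel and the average of its surrounding pixels, and categorize the pixel as a signature (indicia) point when that difference falls within a predetermined range. The neighborhood size and the range bounds below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of signature-point detection by pixel-value differences.

def find_signature_points(image, lo=8, hi=64):
    """image: 2-D list of luminance values; returns (row, col) indicia locations.

    A pixel qualifies when |pixel - average of its 8 neighbors| lies within
    the predetermined difference range [lo, hi].
    """
    points = []
    rows, cols = len(image), len(image[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            neighbors = [image[r + dr][c + dc]
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0)]
            avg = sum(neighbors) / len(neighbors)
            if lo <= abs(image[r][c] - avg) <= hi:
                points.append((r, c))
    return points

img = [[100, 100, 100],
       [100, 130, 100],
       [100, 100, 100]]
assert find_signature_points(img) == [(1, 1)]   # 130 vs. neighbor average 100
```

The same test works on either luminance or a color value, matching the claim's "selected from the group consisting of luminance and a color value".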








Targeted Patent:

Patent: US6641053B1
Filed: 2002-10-16
Issued: 2003-11-04
Patent Holder: (Original Assignee) Xerox Corp     (Current Assignee) BASSFIELD IP LLC
Inventor(s): Jeff Breidenbach, David L. Hecht

Title: Foreground/background document processing with dataglyphs

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US6456726B1
Filed: 1999-10-26
Issued: 2002-09-24
Patent Holder: (Original Assignee) Matsushita Electric Industrial Co Ltd     (Current Assignee) Panasonic Holdings Corp
Inventor(s): Hong Heather Yu, Min Wu, Xin Li, Alexander D. Gelman

Title: Methods and apparatus for multi-layer data hiding

[FEATURE ID: 1] method | procedure, technique, methods, process, step, scheme, mode | [FEATURE ID: 1] method, steps, embedding space
[TRANSITIVE ID: 2] producing | defining, generating, providing, constructing, using, composing, determining | [TRANSITIVE ID: 2] having, evaluating, selecting
[FEATURE ID: 3] composite machine, human invisible print materials | graphics, textures, video, data, images | [FEATURE ID: 3] authentication data
[FEATURE ID: 4] human, invisible print materials | data, information, text, ink, user, invisible | [FEATURE ID: 4] hidden
[TRANSITIVE ID: 5] comprising | including, includes, comprises, by, containing, involving, having | [TRANSITIVE ID: 5] comprising
[TRANSITIVE ID: 6] generating | establishing, applying, obtaining, defining, providing, placing | [TRANSITIVE ID: 6] embedding, assessing
[FEATURE ID: 7] background image, spatial pointer | watermark, symbol, signal, image, spectrum, glyph, domain | [FEATURE ID: 7] bitstream, base domain embedding
[TRANSITIVE ID: 8] said | the, this, each, such | [TRANSITIVE ID: 8] said
[FEATURE ID: 9] glyphtone cells, grayscale image data values, distinguishable patterns, graphical image, supplementary information, font identifier, additional computer data, color parameters, first document, composite document | information, content, text, instructions, parameters, metadata, tags | [FEATURE ID: 9] data, primary hidden data, control information, access control data, keys, management rules, synchronization data, decoding data, same domain
[FEATURE ID: 10] halftone cells | levels, features, patterns, graphics, types, images | [FEATURE ID: 10] goals, error correction data
[TRANSITIVE ID: 11] compositing | encoding, masking, blending | [TRANSITIVE ID: 11] spectrum domain embedding
[FEATURE ID: 12] second image such, readable character | representation, form, parameter, image, watermark, feature, property | [FEATURE ID: 12] media unit
[FEATURE ID: 13] adjacent visible halftone cells, version | parameters, characteristics, data, information, format, state, content | [FEATURE ID: 13] data hiding capacity, different hidden data layers, identification data
[FEATURE ID: 14] second image | documents, contents, images, information | [FEATURE ID: 14] host data
[FEATURE ID: 15] claim | embodiment, item, clause, paragraph, claimed, step, claim of | [FEATURE ID: 15] claim, embedding scheme
[FEATURE ID: 16] portion, digital encoding, point, second document | pattern, section, first, representation, subset, segment, background | [FEATURE ID: 16] ruling layer
[FEATURE ID: 17] area | envelope, orientation, interface, environment, angle, array, interval | [FEATURE ID: 17] embedding technique
[FEATURE ID: 18] step | preliminary step, further step, preceding step, added step, method step, sub step, optional step | [FEATURE ID: 18] step
1 . A method [FEATURE ID: 1]

of producing [TRANSITIVE ID: 2]

a composite machine [FEATURE ID: 3]

- readable and human [FEATURE ID: 4]

- readable document comprising [TRANSITIVE ID: 5]

: generating [TRANSITIVE ID: 6]

a background image [FEATURE ID: 7]

on a substrate , said [TRANSITIVE ID: 8]

background image comprising coded glyphtone cells [FEATURE ID: 9]

based on grayscale image data values [FEATURE ID: 9]

, each of said halftone cells [FEATURE ID: 10]

comprising one of at least two distinguishable patterns [FEATURE ID: 9]

; compositing [TRANSITIVE ID: 11]

the background image with a second image such [FEATURE ID: 12]

that two or more adjacent visible halftone cells [FEATURE ID: 13]

may be decoded and the second image [FEATURE ID: 14]

may be viewed . 2 . The method of claim [FEATURE ID: 15]

1 , wherein the second image comprises a human - readable image . 3 . The method of claim 1 , wherein the second image comprises a graphical image [FEATURE ID: 9]

. 4 . The method of claim 1 , wherein the second image is spatially registered with the background image . 5 . The method of claim 1 , wherein at least a portion [FEATURE ID: 16]

of the background image is printed using glyphs . 6 . The method of claim 1 , wherein at least a portion of the background image is printed using human invisible print materials [FEATURE ID: 3]

. 7 . The method of claim 1 , wherein the background image comprises a digital encoding [FEATURE ID: 16]

of the second image . 8 . The method of claim 1 , wherein the background image includes at least one spatial pointer [FEATURE ID: 7]

. 9 . The method of claim 8 , wherein the spatial pointer includes a location identifier and supplementary information [FEATURE ID: 9]

. 10 . The method of claim 9 , wherein the location identifier refers to a point [FEATURE ID: 16]

on the substrate . 11 . The method of claim 9 , wherein the location identifier refers to an area [FEATURE ID: 17]

on the substrate . 12 . The method of claim 11 wherein the area comprises human - invisible print materials [FEATURE ID: 4]

. 13 . The method of claim 9 , wherein the supplementary information defines a human - readable character [FEATURE ID: 12]

. 14 . The method of claim 13 , wherein the supplementary information includes a font identifier [FEATURE ID: 9]

. 15 . The method of claim 9 , wherein the supplementary information is a pointer to additional computer data [FEATURE ID: 9]

. 16 . The method of claim 9 , wherein the supplementary information defines one or more color parameters [FEATURE ID: 9]

. 17 . A method for comparing a first document [FEATURE ID: 9]

to a second document [FEATURE ID: 16]

, comprising : inputting a composite document [FEATURE ID: 9]

into a memory , said composite document comprised of a first image overlaying a second image ; separating the first image from the second image ; decoding the second image ; and comparing the first image to a decoded version [FEATURE ID: 13]

of the second image . 18 . The method of claim 17 , further comprising the step [FEATURE ID: 18]

1 . A method [FEATURE ID: 1]

of embedding [FEATURE ID: 6]

hidden [TRANSITIVE ID: 4]

data [FEATURE ID: 9]

into host data [FEATURE ID: 14]

having [TRANSITIVE ID: 2]

a media unit [FEATURE ID: 12]

, comprising [TRANSITIVE ID: 5]

the steps [FEATURE ID: 1]

of : evaluating [TRANSITIVE ID: 2]

the media unit of the host data ; assessing [TRANSITIVE ID: 6]

the data hiding capacity [FEATURE ID: 13]

of said [TRANSITIVE ID: 8]

media unit and selecting [TRANSITIVE ID: 2]

at least one embedding space [FEATURE ID: 1]

and at least one embedding algorithm to accommodate multiple data hiding goals [FEATURE ID: 10]

associated with two different hidden data layers [FEATURE ID: 13]

; embedding a ruling layer [FEATURE ID: 16]

of primary hidden data [FEATURE ID: 9]

into the media unit ; and embedding at least one governing layer of secondary hidden data on top of the ruling layer of primary hidden data , such that embedding the primary and secondary hidden data into the host data generates embedded data , wherein the governing layer of secondary hidden data provides control information [FEATURE ID: 9]

for controlling the primary hidden data and the host data . 2 . The method of claim [FEATURE ID: 15]

1 further comprising the step [FEATURE ID: 18]

of mapping the primary hidden data into a bitstream [FEATURE ID: 7]

before embedding into the media unit . 3 . The method of claim 1 wherein the secondary hidden data is selected from the group of : error correction data [FEATURE ID: 10]

, identification data [FEATURE ID: 13]

, access control data [FEATURE ID: 9]

, keys [FEATURE ID: 9]

, management rules [FEATURE ID: 9]

, synchronization data [FEATURE ID: 9]

, decoding data [FEATURE ID: 9]

, and authentication data [FEATURE ID: 3]

. 4 . The method of claim 1 wherein the steps of embedding further comprise employing an embedding scheme [FEATURE ID: 15]

selected from the group of : base domain embedding [FEATURE ID: 7]

and spectrum domain embedding [FEATURE ID: 11]

. 5 . The method of claim 1 further comprising the step of selecting an embedding technique [FEATURE ID: 17]

wherein the secondary hidden data is embedded substantially noninterfering with the primary hidden data . 6 . The method of claim 5 wherein the embedding technique is selected from the group of : substantially noninterfering features extracted from the same domain [FEATURE ID: 9]
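The layered structure of US6456726 claim 1 (a ruling layer of primary hidden data with a governing layer of control information on top) can be illustrated with an assumed bit-plane scheme that is not the patent's own: primary bits go into the least-significant bit of each host sample, and a toy parity layer (standing in for error-correction/control data) occupies the next bit plane, embedded noninterferingly on top of the primary layer.

```python
# Hypothetical two-layer hiding: bit 0 = ruling layer, bit 1 = governing layer.

def embed(host, primary):
    """Embed primary bits plus a parity governing layer into host samples."""
    parity = [sum(primary) % 2] * len(primary)   # toy control information
    out = []
    for h, p, q in zip(host, primary, parity):
        h = (h & ~0b11) | (q << 1) | p           # bit0: primary, bit1: parity
        out.append(h)
    return out

def extract(stego):
    """Recover the primary layer and verify it via the governing layer."""
    primary = [s & 1 for s in stego]
    parity = [(s >> 1) & 1 for s in stego]
    ok = all(q == sum(primary) % 2 for q in parity)
    return primary, ok

stego = embed([200, 201, 202, 203], [1, 0, 1, 1])
bits, ok = extract(stego)
assert bits == [1, 0, 1, 1] and ok
```

Because the two layers occupy disjoint bit planes, extracting or corrupting one does not disturb the other, which is the "substantially noninterfering" property of claim 5.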








Targeted Patent:

Patent: US6641053B1
Filed: 2002-10-16
Issued: 2003-11-04
Patent Holder: (Original Assignee) Xerox Corp     (Current Assignee) BASSFIELD IP LLC
Inventor(s): Jeff Breidenbach, David L. Hecht

Title: Foreground/background document processing with dataglyphs

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US6456724B1
Filed: 1998-05-06
Issued: 2002-09-24
Patent Holder: (Original Assignee) NEC Corp     (Current Assignee) NEC Personal Computers Ltd
Inventor(s): Junya Watanabe

Title: Electronic watermarking system capable of providing image data with high secrecy

[TRANSITIVE ID: 1] producing, generating | providing, rendering, placing, forming, applying, using, displaying | [TRANSITIVE ID: 1] inserting, comprising
[FEATURE ID: 2] composite machine, graphical image, first image | document, video, image, picture, data, glyph, display | [FEATURE ID: 2] original image, output image
[FEATURE ID: 3] human, glyphtone cells, adjacent visible halftone cells, spatial pointer, invisible print materials, readable character, version | image, information, data, output, original, object, ink | [FEATURE ID: 3] electronic watermark, electronic watermarking data, coordinate conversion data, electronic image, electronic watermarked image, original electronic watermarking data
[TRANSITIVE ID: 4] comprising | including, containing, having | [TRANSITIVE ID: 4] consisting
[FEATURE ID: 5] background image, substrate, portion, digital encoding, point | surface, section, background, pattern, matrix, page, pixel | [FEATURE ID: 5] part, region
[FEATURE ID: 6] grayscale image data values | pixels, data, images | [FEATURE ID: 6] secondary blocks
[FEATURE ID: 7] second image such, first document | first, second, image, third, data, coordinates, position | [FEATURE ID: 7] first coordinate, first inverse coordinate, first inverse coordinate conversion data, second coordinate, coordinate, second inverse coordinate
[FEATURE ID: 8] second image | second, output, data | [FEATURE ID: 8] second inverse coordinate conversion data
[FEATURE ID: 9] claim | embodiment, figure, paragraph, item, claimed, step, requirement | [FEATURE ID: 9] claim
[FEATURE ID: 10] area | aperture, array, address, interval | [FEATURE ID: 10] output
[FEATURE ID: 11] additional computer data | material, image, information | [FEATURE ID: 11] data
[FEATURE ID: 12] memory | computer, program, terminal, server, machine | [FEATURE ID: 12] device
1 . A method of producing [TRANSITIVE ID: 1]

a composite machine [FEATURE ID: 2]

- readable and human [FEATURE ID: 3]

- readable document comprising [TRANSITIVE ID: 4]

: generating [TRANSITIVE ID: 1]

a background image [FEATURE ID: 5]

on a substrate [FEATURE ID: 5]

, said background image comprising coded glyphtone cells [FEATURE ID: 3]

based on grayscale image data values [FEATURE ID: 6]

, each of said halftone cells comprising one of at least two distinguishable patterns ; compositing the background image with a second image such [FEATURE ID: 7]

that two or more adjacent visible halftone cells [FEATURE ID: 3]

may be decoded and the second image [FEATURE ID: 8]

may be viewed . 2 . The method of claim [FEATURE ID: 9]

1 , wherein the second image comprises a human - readable image . 3 . The method of claim 1 , wherein the second image comprises a graphical image [FEATURE ID: 2]

. 4 . The method of claim 1 , wherein the second image is spatially registered with the background image . 5 . The method of claim 1 , wherein at least a portion [FEATURE ID: 5]

of the background image is printed using glyphs . 6 . The method of claim 1 , wherein at least a portion of the background image is printed using human invisible print materials . 7 . The method of claim 1 , wherein the background image comprises a digital encoding [FEATURE ID: 5]

of the second image . 8 . The method of claim 1 , wherein the background image includes at least one spatial pointer [FEATURE ID: 3]

. 9 . The method of claim 8 , wherein the spatial pointer includes a location identifier and supplementary information . 10 . The method of claim 9 , wherein the location identifier refers to a point [FEATURE ID: 5]

on the substrate . 11 . The method of claim 9 , wherein the location identifier refers to an area [FEATURE ID: 10]

on the substrate . 12 . The method of claim 11 wherein the area comprises human - invisible print materials [FEATURE ID: 3]

. 13 . The method of claim 9 , wherein the supplementary information defines a human - readable character [FEATURE ID: 3]

. 14 . The method of claim 13 , wherein the supplementary information includes a font identifier . 15 . The method of claim 9 , wherein the supplementary information is a pointer to additional computer data [FEATURE ID: 11]

. 16 . The method of claim 9 , wherein the supplementary information defines one or more color parameters . 17 . A method for comparing a first document [FEATURE ID: 7]

to a second document , comprising : inputting a composite document into a memory [FEATURE ID: 12]

, said composite document comprised of a first image [FEATURE ID: 2]

overlaying a second image ; separating the first image from the second image ; decoding the second image ; and comparing the first image to a decoded version [FEATURE ID: 3]
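Claim 1 above describes halftone cells that each take one of at least two distinguishable patterns while still rendering a grayscale background. A minimal sketch of that idea, assuming hypothetical 3x3 diagonal-stroke patterns and a simple fill rule (not the actual dataglyph geometry of the patent):

```python
# Illustrative glyphtone cell: one data bit selects one of two
# distinguishable 3x3 patterns ("\" vs "/"), while the grayscale image
# data value controls how much of the pattern is inked, so tiled cells
# also render a visible background image. Geometry is hypothetical.
PATTERN_0 = [(0, 0), (1, 1), (2, 2)]   # backslash stroke encodes bit 0
PATTERN_1 = [(0, 2), (1, 1), (2, 0)]   # slash stroke encodes bit 1

def render_cell(bit, gray):
    """Return a 3x3 cell (1 = inked) carrying `bit` at darkness `gray` (0-255)."""
    stroke = PATTERN_1 if bit else PATTERN_0
    n_dark = round(len(stroke) * gray / 255)   # darker value -> more ink
    cell = [[0] * 3 for _ in range(3)]
    for r, c in stroke[:n_dark]:
        cell[r][c] = 1
    return cell

def decode_cell(cell):
    """Classify the cell by which diagonal holds its dark pixels."""
    score_0 = sum(cell[r][c] for r, c in PATTERN_0)
    score_1 = sum(cell[r][c] for r, c in PATTERN_1)
    return 1 if score_1 > score_0 else 0

bits = [1, 0, 1, 1]
grays = [200, 180, 220, 160]   # grayscale image data values
cells = [render_cell(b, g) for b, g in zip(bits, grays)]
assert [decode_cell(c) for c in cells] == bits
```

Tiling such cells reproduces the background image to the eye, while a reader that classifies each cell's dominant diagonal recovers the embedded bits from two or more adjacent visible cells.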

1 . An electronic watermarking device which is for use in inserting [TRANSITIVE ID: 1]

an electronic watermark [FEATURE ID: 3]

into an original image [FEATURE ID: 2]

consisting [TRANSITIVE ID: 4]

of a plurality of primary blocks , comprising [TRANSITIVE ID: 1]

: first discrete cosine transform means for carrying out first discrete cosine transform of at least one of said a plurality of primary blocks ; inserting means for inserting electronic watermarking data [FEATURE ID: 3]

into an output [FEATURE ID: 10]

of said first discrete cosine transform means ; inverse discrete cosine transform means for carrying out inverse discrete cosine transform of an output of said inserting means ; and first coordinate [FEATURE ID: 7]

converting means which have first coordinate conversion data and which carry out first coordinate conversion of an output of said inverse discrete cosine transform means by said electronic watermarking data and said coordinate conversion data [FEATURE ID: 3]

. 2 . An electronic image [FEATURE ID: 3]

reproducing device [FEATURE ID: 12]

which is for use in reproducing an original image from an electronic watermarked image [FEATURE ID: 3]

consisting of a plurality of secondary blocks [FEATURE ID: 6]

, comprising : second discrete cosine transform means for carrying out second discrete cosine transform of at least one of said a plurality of secondary blocks ; extracting means for extracting electronic watermarking data from an output of said second discrete cosine transform means ; extracted data [FEATURE ID: 11]

containing means for containing an output of said extracting means ; correspondence detecting means which compare an output of said extracted data containing means with an original electronic watermarking data [FEATURE ID: 3]

to detect whether or not the output of said extracted data containing means is corresponding with the original electronic watermarking data ; output image [FEATURE ID: 2]

switching means which output said electronic watermarked image when said output of said extracted data containing means is detected to be not corresponding with said original electronic watermarking data by said correspondence detecting means ; and first inverse coordinate [FEATURE ID: 7]

converting means which have first inverse coordinate conversion data [FEATURE ID: 7]

and which carry out first inverse coordinate conversion of said output of said extracted data containing means with reference to said original electronic watermarking data and said first inverse coordinate conversion data when said output of said extracted data containing means is detected to be corresponding with said original electronic watermarking data by said correspondence detecting means . 3 . An electronic watermarking system which is for use in inserting an electronic watermark into an original image consisting of a plurality of primary blocks , comprising : inserting means for inserting electronic watermarking data into at least one of said a plurality of primary blocks ; second coordinate [FEATURE ID: 7]

converting means which have second coordinate conversion data and which carry out second coordinate conversion of an output of said inserting means by said electronic watermarking data and said second coordinate conversion data ; extracting means for extracting electronic watermarking data from a part [FEATURE ID: 5]

of the coordinate [FEATURE ID: 7]

- converted electronic watermarked image ; and second inverse coordinate [FEATURE ID: 7]

converting means which have second inverse coordinate conversion data [FEATURE ID: 8]

and which carry out second inverse coordinate conversion of said output of said extracting means with reference to said original electronic watermarking data and said second inverse coordinate conversion data when said output of said extracting means is corresponding with said original electronic watermarking data . 4 . An electronic watermarking system as claimed in claim [FEATURE ID: 9]

3 , wherein said electronic watermarking data is identical with said original electronic watermarking data , said second coordinate conversion data being identical with said second inverse coordinate conversion data . 5 . An electronic watermarking system as claimed in claim 3 , wherein said second coordinate converting means calculate a coordinate of one of said primary blocks outputted from said inserting means by said electronic watermarking data and said second coordinate conversion data , said coordinate being occupied within a region [FEATURE ID: 5]
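The forward-DCT / insert / inverse-DCT chain recited in claim 1 can be sketched in one dimension. The coefficient index, the fixed embedding strength, and sign-based extraction below are assumptions for illustration; the claimed coordinate-conversion means are not modeled.

```python
import math

N = 8  # 1-D block length; an 8-point block is an illustrative choice

def scale(k):
    return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)

def dct(block):
    """Orthonormal DCT-II of an N-sample block."""
    return [scale(k) * sum(block[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                           for n in range(N))
            for k in range(N)]

def idct(coeffs):
    """Orthonormal DCT-III (the inverse transform)."""
    return [sum(scale(k) * coeffs[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for k in range(N))
            for n in range(N)]

MID = 4          # mid-frequency coefficient carrying the mark (assumption)
STRENGTH = 8.0   # embedding strength (assumption)

def insert_bit(block, bit):
    """Transform, overwrite one coefficient with a signed mark, invert."""
    c = dct(block)
    c[MID] = STRENGTH if bit else -STRENGTH
    return idct(c)

def extract_bit(block):
    return 1 if dct(block)[MID] > 0 else 0

row = [52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0]
assert extract_bit(insert_bit(row, 1)) == 1
assert extract_bit(insert_bit(row, 0)) == 0
```

Because the transform pair is orthonormal, re-transforming the watermarked samples recovers the inserted coefficient exactly (up to floating-point error), which is what the extracting means of claim 2 relies on.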








Targeted Patent:

Patent: US6641053B1
Filed: 2002-10-16
Issued: 2003-11-04
Patent Holder: (Original Assignee) Xerox Corp     (Current Assignee) BASSFIELD IP LLC
Inventor(s): Jeff Breidenbach, David L. Hecht

Title: Foreground/background document processing with dataglyphs

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US20020133499A1
Filed: 2001-03-13
Issued: 2002-09-19
Patent Holder: (Original Assignee) Sean Ward; Isaac Richards     (Current Assignee) Relatable LLC
Inventor(s): Sean Ward, Isaac Richards

Title: System and method for acoustic fingerprinting

[FEATURE ID: 1] method, second image such, step / process, result, means, way, manner, system, stage / [FEATURE ID: 1] method, steps, step
[TRANSITIVE ID: 2] producing / obtaining, providing, recording, defining, indicating, reading, storing / [TRANSITIVE ID: 2] keeping, accessing, determining, representing, identifying
[FEATURE ID: 3] composite machine, human, second document, memory / computer, device, document, machine, data, video, text / [FEATURE ID: 3] digital file, database, music files
[TRANSITIVE ID: 4] comprising / including, by, containing, involving, having, for, of / [TRANSITIVE ID: 4] comprising
[TRANSITIVE ID: 5] generating, compositing / providing, matching, receiving, contrasting, displaying, synchronizing, printing / [TRANSITIVE ID: 5] comparing
[FEATURE ID: 6] substrate / matrix, computer, platform / [FEATURE ID: 6] file database
[FEATURE ID: 7] glyphtone cells, distinguishable patterns, invisible print materials, color parameters / content, information, data, patterns, text, characters, indicia / [FEATURE ID: 7] features, sound files, time frames, file features, label, file show
[TRANSITIVE ID: 8] based / located, generated, represented, defined / [TRANSITIVE ID: 8] stored
[FEATURE ID: 9] grayscale image data values, halftone cells / images, patterns, features, information, glyph, dots, halftone / [FEATURE ID: 9] fingerprints
[FEATURE ID: 10] adjacent visible halftone cells / features, information, data, characteristics, fingerprints, signals / [FEATURE ID: 10] file fingerprints, Haar wavelets
[FEATURE ID: 11] second image, supplementary information, additional computer data, composite document / information, text, documents, contents, metadata, coordinates, digital / [FEATURE ID: 11] digital files
[FEATURE ID: 12] claim / step, claimed, item, clause, paragraph, embodiment, preceding claim / [FEATURE ID: 12] claim, method claim
[FEATURE ID: 13] graphical image, font identifier / signature, pattern, watermark, bitmap, font, logo, code / [FEATURE ID: 13] fingerprint
[FEATURE ID: 14] digital encoding / description, duplicate, summary, modification, thumbnail / [FEATURE ID: 14] new unique identifier
[FEATURE ID: 15] spatial pointer, location identifier, readable character, pointer, version / descriptor, reference, location, image, identifier, symbol, name / [FEATURE ID: 15] unique identifier
[FEATURE ID: 16] first document / other, all, user, second, digital, document, reference / [FEATURE ID: 16] file, subsequent
1 . A method [FEATURE ID: 1]

of producing [TRANSITIVE ID: 2]

a composite machine [FEATURE ID: 3]

- readable and human [FEATURE ID: 3]

- readable document comprising [TRANSITIVE ID: 4]

: generating [TRANSITIVE ID: 5]

a background image on a substrate [FEATURE ID: 6]

, said background image comprising coded glyphtone cells [FEATURE ID: 7]

based [TRANSITIVE ID: 8]

on grayscale image data values [FEATURE ID: 9]

, each of said halftone cells [FEATURE ID: 9]

comprising one of at least two distinguishable patterns [FEATURE ID: 7]

; compositing [TRANSITIVE ID: 5]

the background image with a second image such [FEATURE ID: 1]

that two or more adjacent visible halftone cells [FEATURE ID: 10]

may be decoded and the second image [FEATURE ID: 11]

may be viewed . 2 . The method of claim [FEATURE ID: 12]

1 , wherein the second image comprises a human - readable image . 3 . The method of claim 1 , wherein the second image comprises a graphical image [FEATURE ID: 13]

. 4 . The method of claim 1 , wherein the second image is spatially registered with the background image . 5 . The method of claim 1 , wherein at least a portion of the background image is printed using glyphs . 6 . The method of claim 1 , wherein at least a portion of the background image is printed using human invisible print materials . 7 . The method of claim 1 , wherein the background image comprises a digital encoding [FEATURE ID: 14]

of the second image . 8 . The method of claim 1 , wherein the background image includes at least one spatial pointer [FEATURE ID: 15]

. 9 . The method of claim 8 , wherein the spatial pointer includes a location identifier [FEATURE ID: 15]

and supplementary information [FEATURE ID: 11]

. 10 . The method of claim 9 , wherein the location identifier refers to a point on the substrate . 11 . The method of claim 9 , wherein the location identifier refers to an area on the substrate . 12 . The method of claim 11 wherein the area comprises human - invisible print materials [FEATURE ID: 7]

. 13 . The method of claim 9 , wherein the supplementary information defines a human - readable character [FEATURE ID: 15]

. 14 . The method of claim 13 , wherein the supplementary information includes a font identifier [FEATURE ID: 13]

. 15 . The method of claim 9 , wherein the supplementary information is a pointer [FEATURE ID: 15]

to additional computer data [FEATURE ID: 11]

. 16 . The method of claim 9 , wherein the supplementary information defines one or more color parameters [FEATURE ID: 7]

. 17 . A method for comparing a first document [FEATURE ID: 16]

to a second document [FEATURE ID: 3]

, comprising : inputting a composite document [FEATURE ID: 11]

into a memory [FEATURE ID: 3]

, said composite document comprised of a first image overlaying a second image ; separating the first image from the second image ; decoding the second image ; and comparing the first image to a decoded version [FEATURE ID: 15]

of the second image . 18 . The method of claim 17 , further comprising the step [FEATURE ID: 1]
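Claim 17's flow (separate the overlaid images, decode the background, compare the foreground to the decoded version) can be sketched with a toy bit-plane layering; the two-bit "pixels" and trivial decoder below are invented stand-ins for the actual glyph decoding.

```python
# Toy composite document: the high bit of each pixel is the visible first
# image, the low bit is a machine-readable encoding of that same image.
def separate(composite):
    foreground = [px & 0b10 for px in composite]   # visible layer
    background = [px & 0b01 for px in composite]   # encoded layer
    return foreground, background

def decode(background):
    """Placeholder decoding step: expand the encoded bits back to the
    foreground's bit position."""
    return [b << 1 for b in background]

def documents_match(composite):
    """Compare the first image to a decoded version of the second image."""
    fg, bg = separate(composite)
    return fg == decode(bg)

authentic = [0b11, 0b00, 0b11, 0b11]   # foreground agrees with encoding
tampered  = [0b10, 0b00, 0b11, 0b11]   # foreground altered, encoding not
assert documents_match(authentic)
assert not documents_match(tampered)
```

The design point is that the comparison happens between a directly viewed layer and an independently decoded copy, so altering either layer alone makes the two disagree.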

1 . A method [FEATURE ID: 1]

of keeping [TRANSITIVE ID: 2]

track of access to digital files [FEATURE ID: 11]

, the steps [FEATURE ID: 1]

comprising [TRANSITIVE ID: 4]

: accessing [TRANSITIVE ID: 2]

a digital file [FEATURE ID: 3]

; determining [TRANSITIVE ID: 2]

a fingerprint [FEATURE ID: 13]

for the file , the fingerprint representing [TRANSITIVE ID: 2]

one or more features [FEATURE ID: 7]

of the file ; comparing [TRANSITIVE ID: 5]

the fingerprint for the file to file [TRANSITIVE ID: 16]

fingerprints [FEATURE ID: 9]

stored [TRANSITIVE ID: 8]

in a file database [FEATURE ID: 6]

, the file fingerprints [FEATURE ID: 10]

uniquely identifying [TRANSITIVE ID: 2]

a corresponding digital file and having a corresponding unique identifier [FEATURE ID: 15]

stored in the database [FEATURE ID: 3]

; upon the comparing step [FEATURE ID: 1]

revealing a match between the fingerprint for the file and a stored fingerprint , outputting the corresponding unique identifier for the corresponding digital file ; and upon the comparing step revealing no match between the fingerprint for the file and a stored fingerprint , storing the fingerprint in the database , generating a new unique identifier [FEATURE ID: 14]

for the file , and storing the new unique identifier for the file . 2 . The method of claim [FEATURE ID: 12]

1 wherein the digital files represent sound files [FEATURE ID: 7]

. 3 . The method of claim 2 wherein the digital files represent music files [FEATURE ID: 3]

. 4 . The method of claim 3 wherein the features represented by the fingerprint include features selected from the group consisting of : spectral residuals ; and transforms of Haar wavelets [FEATURE ID: 10]

. 5 . The method of claim 4 wherein the features represented by the fingerprint include spectral residuals and transforms of Haar wavelets . 6 . The method of claim 1 wherein the step of determining the fingerprint of the file includes generating time frames [FEATURE ID: 7]

for the file and determining file features [FEATURE ID: 7]

within the time frames . 7 . A method of keeping track of access to digital files , the steps comprising : accessing a digital file ; determining a fingerprint for the file , the fingerprint representing one or more features of the file , the features include features selected from the group consisting of : spectral residuals ; and transforms of Haar wavelets ; comparing the fingerprint for the file to file fingerprints stored in a file database , the file fingerprints uniquely identifying a corresponding digital file and having a corresponding unique identifier stored in the database ; upon the comparing step revealing a match between the fingerprint for the file and a stored fingerprint , outputting the corresponding unique identifier for the corresponding digital file . 8 . The method claim [FEATURE ID: 12]

7 wherein the digital files represent sound files . 9 . The method claim 7 wherein the digital files represent music files . 10 . The method of claim 9 further comprising the step of : upon the comparing step revealing no match between the fingerprint for the file and a stored fingerprint , storing the fingerprint in the database , generating a new unique identifier for the file , and storing the new unique identifier for the file . 11 . The method of claim 10 wherein the features represented by the fingerprint include spectral residuals and transforms of Haar wavelets . 12 . The method of claim 7 wherein the features represented by the fingerprint include spectral residuals and transforms of Haar wavelets . 13 . A method of keeping track of access to digital files , the steps comprising : accessing a digital file ; determining a fingerprint for the file , the fingerprint representing one or more features of the file ; comparing the fingerprint for the file to file fingerprints stored in a file database , the file fingerprints uniquely identifying a corresponding digital file and having a corresponding unique identifier stored in the database ; upon the comparing step revealing a match between the fingerprint for the file and a stored fingerprint , outputting the corresponding unique identifier for the corresponding digital file ; and storing any label [FEATURE ID: 7]

applied to the file ; and automatically correcting a label applied to a file if subsequent [FEATURE ID: 16]

accesses to the file show [FEATURE ID: 7]
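The lookup-or-register flow of claim 1 above can be sketched as follows; the feature extraction here is a placeholder hash of coarse frame averages, not the spectral-residual or Haar-wavelet features named in the claims.

```python
import hashlib
import itertools

class FingerprintDB:
    """Minimal sketch of the claimed flow: fingerprint a file, look it up,
    output the stored identifier on a match, or store fingerprint plus a
    new unique identifier when there is no match."""

    def __init__(self):
        self._ids = {}                      # fingerprint -> unique identifier
        self._counter = itertools.count(1)

    def fingerprint(self, samples):
        # Placeholder features: per-frame averages over 4-sample time
        # frames, hashed into a compact fingerprint.
        frames = [samples[i:i + 4] for i in range(0, len(samples), 4)]
        feats = tuple(sum(f) // len(f) for f in frames)
        return hashlib.sha256(repr(feats).encode()).hexdigest()

    def identify(self, samples):
        """Return (identifier, is_new) for the accessed digital file."""
        fp = self.fingerprint(samples)
        if fp in self._ids:
            return self._ids[fp], False      # match: output stored identifier
        new_id = f"track-{next(self._counter)}"
        self._ids[fp] = new_id               # no match: store fingerprint + id
        return new_id, True

db = FingerprintDB()
clip = [10, 12, 11, 13, 40, 42, 41, 43]
first_id, is_new = db.identify(clip)
again_id, is_new2 = db.identify(list(clip))
assert is_new and not is_new2 and first_id == again_id
```

A second access to the same content hits the stored fingerprint and returns the same identifier, which is what lets the system keep track of access across files.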









 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US20020131076A1
Filed: 1999-06-29
Issued: 2002-09-19
Patent Holder: (Original Assignee) Digimarc Corp     (Current Assignee) Digimarc Corp
Inventor(s): Bruce Davis

Title: Distribution and use of trusted photos

[FEATURE ID: 1] method, second image such / way, manner, methodology, technique, system, means, form / [FEATURE ID: 1] method, document printing method
[TRANSITIVE ID: 2] producing, compositing / providing, obtaining, transmitting, generating, matching, issuing, masking / [TRANSITIVE ID: 2] printing, receiving, processing
[FEATURE ID: 3] composite machine, glyphtone cells, grayscale image data values, adjacent visible halftone cells, glyphs, location identifier, supplementary information, area, invisible print materials, additional computer data, first document, composite document, first image / document, image, information, content, digital, ink, indicia / [FEATURE ID: 3] trusted, user, driver license photo, data structure, personal image, digital photo, photo, data, text, remote computer, computer
[FEATURE ID: 4] readable, based, memory / formed, generated, printed, accessible, computer, database, created / [FEATURE ID: 4] maintained
[FEATURE ID: 5] human / user, operator, information, image / [FEATURE ID: 5] individual
[TRANSITIVE ID: 6] comprising / including, containing, having, of, by, involving, with / [TRANSITIVE ID: 6] comprising, soliciting, depicting
[TRANSITIVE ID: 7] generating / printing, storing, rendering, representing, providing, receiving / [TRANSITIVE ID: 7] electronic transmission, watermarking
[FEATURE ID: 8] background image, second image, font identifier, version / output, code, message, background, watermark, pattern, print / [FEATURE ID: 8] image, identification code
[FEATURE ID: 9] substrate / medium, print, form, paper, sheet, device, page / [FEATURE ID: 9] document
[TRANSITIVE ID: 10] said / the, this, each, such / [TRANSITIVE ID: 10] said
[FEATURE ID: 11] claim / embodiment, clause, claimed, claim of, the claim, step, invention / [FEATURE ID: 11] claim
[TRANSITIVE ID: 12] comprises / utilizes, creates, uses, identifies, produces, provides, forms / [TRANSITIVE ID: 12] prints
[FEATURE ID: 13] readable image / id, identifier, image / [FEATURE ID: 13] index
[FEATURE ID: 14] graphical image / label, signature, photo, photograph, letter, document / [FEATURE ID: 14] photo identification document
[FEATURE ID: 15] portion, digital encoding / subset, segment, component, complement, majority, duplicate, fraction / [FEATURE ID: 15] part
[FEATURE ID: 16] spatial pointer / image, address, application, object, album, index, identifier / [FEATURE ID: 16] archive, identification badge
[FEATURE ID: 17] readable character / identification, identifier, identity, image, input, name, attachment / [FEATURE ID: 17] stored, electronic request, identification code prior
[FEATURE ID: 18] second document / repository, dictionary, memory, library, registry, file / [FEATURE ID: 18] data stricture
1 . A method [FEATURE ID: 1]

of producing [TRANSITIVE ID: 2]

a composite machine [FEATURE ID: 3]

- readable [FEATURE ID: 4]

and human [FEATURE ID: 5]

- readable document comprising [TRANSITIVE ID: 6]

: generating [TRANSITIVE ID: 7]

a background image [FEATURE ID: 8]

on a substrate [FEATURE ID: 9]

, said [TRANSITIVE ID: 10]

background image comprising coded glyphtone cells [FEATURE ID: 3]

based [TRANSITIVE ID: 4]

on grayscale image data values [FEATURE ID: 3]

, each of said halftone cells comprising one of at least two distinguishable patterns ; compositing [TRANSITIVE ID: 2]

the background image with a second image such [FEATURE ID: 1]

that two or more adjacent visible halftone cells [FEATURE ID: 3]

may be decoded and the second image [FEATURE ID: 8]

may be viewed . 2 . The method of claim [FEATURE ID: 11]

1 , wherein the second image comprises [TRANSITIVE ID: 12]

a human - readable image [FEATURE ID: 13]

. 3 . The method of claim 1 , wherein the second image comprises a graphical image [FEATURE ID: 14]

. 4 . The method of claim 1 , wherein the second image is spatially registered with the background image . 5 . The method of claim 1 , wherein at least a portion [FEATURE ID: 15]

of the background image is printed using glyphs [FEATURE ID: 3]

. 6 . The method of claim 1 , wherein at least a portion of the background image is printed using human invisible print materials . 7 . The method of claim 1 , wherein the background image comprises a digital encoding [FEATURE ID: 15]

of the second image . 8 . The method of claim 1 , wherein the background image includes at least one spatial pointer [FEATURE ID: 16]

. 9 . The method of claim 8 , wherein the spatial pointer includes a location identifier [FEATURE ID: 3]

and supplementary information [FEATURE ID: 3]

. 10 . The method of claim 9 , wherein the location identifier refers to a point on the substrate . 11 . The method of claim 9 , wherein the location identifier refers to an area [FEATURE ID: 3]

on the substrate . 12 . The method of claim 11 wherein the area comprises human - invisible print materials [FEATURE ID: 3]

. 13 . The method of claim 9 , wherein the supplementary information defines a human - readable character [FEATURE ID: 17]

. 14 . The method of claim 13 , wherein the supplementary information includes a font identifier [FEATURE ID: 8]

. 15 . The method of claim 9 , wherein the supplementary information is a pointer to additional computer data [FEATURE ID: 3]

. 16 . The method of claim 9 , wherein the supplementary information defines one or more color parameters . 17 . A method for comparing a first document [FEATURE ID: 3]

to a second document [FEATURE ID: 18]

, comprising : inputting a composite document [FEATURE ID: 3]

into a memory [FEATURE ID: 4]

, said composite document comprised of a first image [FEATURE ID: 3]

overlaying a second image ; separating the first image from the second image ; decoding the second image ; and comparing the first image to a decoded version [FEATURE ID: 8]

1 . A method [FEATURE ID: 1]

of printing [TRANSITIVE ID: 2]

a trusted [TRANSITIVE ID: 3]

image [FEATURE ID: 8]

, comprising [TRANSITIVE ID: 6]

: an individual user electronically contacting a governmental agency , soliciting [TRANSITIVE ID: 6]

an image depicting [TRANSITIVE ID: 6]

the user [FEATURE ID: 3]

stored [TRANSITIVE ID: 17]

in an archive [FEATURE ID: 16]

maintained [TRANSITIVE ID: 4]

by said [TRANSITIVE ID: 10]

governmental agency ; electronically receiving [TRANSITIVE ID: 2]

said image from said contacted governmental agency ; and printing a document [FEATURE ID: 9]

incorporating said image . 2 . The method of claim [FEATURE ID: 11]

1 in which it is the individual user who receives said image and prints [FEATURE ID: 12]

said document . 3 . The method of claim 1 in which said document is a photo identification document [FEATURE ID: 14]

. 4 . The method of claim 1 in which said document is an identification badge [FEATURE ID: 16]

. 5 . The method of claim 1 in which the governmental agency is a motor vehicle licensing agency , and the image is a driver license photo [FEATURE ID: 3]

. 6 . The method of claim 1 in which said image is processed with an identification code [FEATURE ID: 8]

by the governmental agency . 7 . The method of claim 1 in which said image is digitally watermarked with a plural - bit code by the governmental agency . 8 . The method of claim 7 in which said plural - bit code serves to identify the individual user ' s name . 9 . The method of claim 8 in which said plural - bit code comprises an index [FEATURE ID: 13]

into a data structure [FEATURE ID: 3]

in which the individual user ' s name is stored . 10 . A document printed according to the method of 1 . 11 . A method of distributing a trusted image , comprising : at a governmental agency , receiving an electronic request [FEATURE ID: 17]

for an archived personal image [FEATURE ID: 3]

from an individual [FEATURE ID: 5]

depicted in said image ; and electronically transmitting said image to said individual . 12 . The method of claim 11 that includes processing [FEATURE ID: 2]

said image with an identification code prior [FEATURE ID: 17]

to said electronic transmission [FEATURE ID: 7]

. 13 . The method of claim 11 that includes digitally watermarking [FEATURE ID: 7]

said image with a plural - bit code prior to said electronic transmission . 14 . The method of claim 13 in which said plural - bit code serves to identify the individual ' s name . 15 . The method of claim 14 in which said plural - bit code comprises an index into a data stricture [FEATURE ID: 18]

in which the individual ' s name is stored . 16 . A document printing method [FEATURE ID: 1]

, comprising : receiving a digital photo [FEATURE ID: 3]

, the photo [FEATURE ID: 3]

having plural - bit data steganographically encoded therein ; by reference to said steganographically encoded data [FEATURE ID: 3]

, generating text [FEATURE ID: 3]

to be printed with said photo ; and printing a document including both said photo and said text . 17 . The method of claim 16 that includes electronically transmitting at least a part [FEATURE ID: 15]

of said plural - bit data to a remote computer [FEATURE ID: 3]

, and receiving the text from said computer [FEATURE ID: 3]
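Claims 7 through 9 use a plural-bit code that is not the name itself but an index into a data structure where the individual's name is stored. A sketch with a toy embedding: LSB substitution stands in for the actual digital watermarking, and the registry contents are invented.

```python
def embed_code(pixels, code, nbits=4):
    """Hide an nbits-wide code in the LSBs of the first nbits pixels
    (a toy stand-in for the agency's digital watermarking)."""
    out = list(pixels)
    for i in range(nbits):
        out[i] = (out[i] & ~1) | ((code >> i) & 1)
    return out

def extract_code(pixels, nbits=4):
    return sum((pixels[i] & 1) << i for i in range(nbits))

# Hypothetical data structure: the code indexes the stored name.
name_registry = {0b1011: "A. Example"}

photo = [200, 201, 198, 197, 202, 203]
marked = embed_code(photo, 0b1011)
assert name_registry[extract_code(marked)] == "A. Example"
```

Keeping the name in a registry rather than in the code itself means the payload stays small and the stored record can be updated without re-watermarking the image.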









 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US6449379B1
Filed: 1993-11-18
Issued: 2002-09-10
Patent Holder: (Original Assignee) Digimarc Corp     (Current Assignee) Digimarc Corp
Inventor(s): Geoffrey B. Rhoads

Title: Video steganography methods avoiding introduction of fixed pattern noise

[FEATURE ID: 1] method, substrate, memory, step / process, device, mode, way, result, receiver, technique / [FEATURE ID: 1] method, recording capability, strength
[TRANSITIVE ID: 2] producing, comprising, generating / representing, defining, displaying, providing, forming, carrying, of / [TRANSITIVE ID: 2] comprising, including, manifesting
[FEATURE ID: 3] composite machine, readable image, graphical image, digital encoding, spatial pointer, supplementary information, first document, composite document / text, image, content, character, message, symbol, template / [FEATURE ID: 3] video, original video data, video data
[FEATURE ID: 4] human, additional computer data / information, image, video, text, audio, data, raw video / [FEATURE ID: 4] original video, message data
[FEATURE ID: 5] background image / image, message, display, signal / [FEATURE ID: 5] video frame
[TRANSITIVE ID: 6] said / the, this, each, such / [TRANSITIVE ID: 6] said
[FEATURE ID: 7] glyphtone cells, grayscale image data values, halftone cells, distinguishable patterns, adjacent visible halftone cells, human invisible print materials, invisible print materials / data, symbols, bits, regions, patterns, information, values / [FEATURE ID: 7] bit message data, plural frames, plural rows, rows, pixels, certain
[TRANSITIVE ID: 8] based / operable, operating, defined / [TRANSITIVE ID: 8] characterized
[TRANSITIVE ID: 9] compositing / combining, modifying, linking, masking, replacing / [TRANSITIVE ID: 9] encoding
[FEATURE ID: 10] second image such / first, second, message, representation, image, feature, property / [FEATURE ID: 10] second frame, part
[FEATURE ID: 11] second image, area / other, item, article, second, element, entity, same / [FEATURE ID: 11] same plural, associated video apparatus
[FEATURE ID: 12] claim / claimed, item, clause, paragraph, embodiment, patent, step / [FEATURE ID: 12] claim
[FEATURE ID: 13] portion, location identifier, font identifier / location, number, shape, marker, component, symbol, header / [FEATURE ID: 13] luminance value
[FEATURE ID: 14] point / spot, site, boundary / [FEATURE ID: 14] constant strength
[FEATURE ID: 15] readable character / feature, parameter, characteristic, format, state, identity, quality / [FEATURE ID: 15] representation, local attribute
[FEATURE ID: 16] color parameters / algorithms, characteristics, rules, keys, parameters / [FEATURE ID: 16] different noise data
[FEATURE ID: 17] version / image, output, encoding / [FEATURE ID: 17] encoded
1 . A method [FEATURE ID: 1]

of producing [TRANSITIVE ID: 2]

a composite machine [FEATURE ID: 3]

- readable and human [FEATURE ID: 4]

- readable document comprising [TRANSITIVE ID: 2]

: generating [TRANSITIVE ID: 2]

a background image [FEATURE ID: 5]

on a substrate [FEATURE ID: 1]

, said [TRANSITIVE ID: 6]

background image comprising coded glyphtone cells [FEATURE ID: 7]

based [TRANSITIVE ID: 8]

on grayscale image data values [FEATURE ID: 7]

, each of said halftone cells [FEATURE ID: 7]

comprising one of at least two distinguishable patterns [FEATURE ID: 7]

; compositing [TRANSITIVE ID: 9]

the background image with a second image such [FEATURE ID: 10]

that two or more adjacent visible halftone cells [FEATURE ID: 7]

may be decoded and the second image [FEATURE ID: 11]

may be viewed . 2 . The method of claim [FEATURE ID: 12]

1 , wherein the second image comprises a human - readable image [FEATURE ID: 3]

. 3 . The method of claim 1 , wherein the second image comprises a graphical image [FEATURE ID: 3]

. 4 . The method of claim 1 , wherein the second image is spatially registered with the background image . 5 . The method of claim 1 , wherein at least a portion [FEATURE ID: 13]

of the background image is printed using glyphs . 6 . The method of claim 1 , wherein at least a portion of the background image is printed using human invisible print materials [FEATURE ID: 7]

. 7 . The method of claim 1 , wherein the background image comprises a digital encoding [FEATURE ID: 3]

of the second image . 8 . The method of claim 1 , wherein the background image includes at least one spatial pointer [FEATURE ID: 3]

. 9 . The method of claim 8 , wherein the spatial pointer includes a location identifier [FEATURE ID: 13]

and supplementary information [FEATURE ID: 3]

. 10 . The method of claim 9 , wherein the location identifier refers to a point [FEATURE ID: 14]

on the substrate . 11 . The method of claim 9 , wherein the location identifier refers to an area [FEATURE ID: 11]

on the substrate . 12 . The method of claim 11 wherein the area comprises human - invisible print materials [FEATURE ID: 7]

. 13 . The method of claim 9 , wherein the supplementary information defines a human - readable character [FEATURE ID: 15]

. 14 . The method of claim 13 , wherein the supplementary information includes a font identifier [FEATURE ID: 13]

. 15 . The method of claim 9 , wherein the supplementary information is a pointer to additional computer data [FEATURE ID: 4]

. 16 . The method of claim 9 , wherein the supplementary information defines one or more color parameters [FEATURE ID: 16]

. 17 . A method for comparing a first document [FEATURE ID: 3]

to a second document , comprising : inputting a composite document [FEATURE ID: 3]

into a memory [FEATURE ID: 1]

, said composite document comprised of a first image overlaying a second image ; separating the first image from the second image ; decoding the second image ; and comparing the first image to a decoded version [FEATURE ID: 17]

of the second image . 18 . The method of claim 17 , further comprising the step [FEATURE ID: 1]

1 . A method [FEATURE ID: 1]

of steganographically encoding [TRANSITIVE ID: 9]

an original video [FEATURE ID: 4]

with plural - bit message data [FEATURE ID: 7]

to yield an encoded [TRANSITIVE ID: 17]

video [FEATURE ID: 3]

, the original video comprising [TRANSITIVE ID: 2]

plural frames [FEATURE ID: 7]

, each frame including [TRANSITIVE ID: 2]

plural rows [FEATURE ID: 7]

of original video data [FEATURE ID: 3]

, the method characterized [TRANSITIVE ID: 8]

by encoding plural - bit message data in each of first and second frames , but manifesting [TRANSITIVE ID: 2]

said [TRANSITIVE ID: 6]

encoding differently in said first and second frames by changing the representation [FEATURE ID: 15]

of the message data [FEATURE ID: 4]

being encoded . 2 . The method of claim [FEATURE ID: 12]

1 in which the first and second frames are sequential frames . 3 . The method of claim 1 in which said representation is changed by reason of different noise data [FEATURE ID: 16]

used in encoding said first and second frames . 4 . The method of claim 1 in which the first frame is encoded with a first plural - bit message , and the second frame [FEATURE ID: 10]

is encoded with the same plural [FEATURE ID: 11]

- bit message . 5 . The method of claim 1 in which the encoding alters video data [FEATURE ID: 3]

in substantially each row of video data in the first frame , and alters video data in substantially each row of video data in the second frame . 6 . The method of claim 5 in which substantially each encoded row of video in the first frame has a non-identical counterpart encoded row of video in the second frame , even if the corresponding rows [FEATURE ID: 7]

of original video data in the first and second frames are identical . 7 . The method of claim 1 that further includes disabling a recording capability [FEATURE ID: 1]

of an associated video apparatus [FEATURE ID: 11]

in response to detection of at least a part [FEATURE ID: 10]

of said plural - bit message data . 8 . The method of claim 1 wherein the video is processed to redundantly encode the plural - bit message data throughout substantially all of a video frame [FEATURE ID: 5]

. 9 . The method of claim 1 that includes changing the strength [FEATURE ID: 1]

of encoding in accordance with a local attribute [FEATURE ID: 15]

of the video , rather than encoding at a constant strength [FEATURE ID: 14]

across each video frame . 10 . The method of claim 1 in which the video comprises pixels [FEATURE ID: 7]

, each with a luminance value [FEATURE ID: 13]

, and in which certain [FEATURE ID: 7]








Targeted Patent:

Patent: US6641053B1
Filed: 2002-10-16
Issued: 2003-11-04
Patent Holder: (Original Assignee) Xerox Corp     (Current Assignee) BASSFIELD IP LLC
Inventor(s): Jeff Breidenbach, David L. Hecht

Title: Foreground/background document processing with dataglyphs

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US6449377B1
Filed: 1995-05-08
Issued: 2002-09-10
Patent Holder: (Original Assignee) Digimarc Corp     (Current Assignee) Digimarc Corp
Inventor(s): Geoffrey B. Rhoads

Title: Methods and systems for watermark processing of line art images

[FEATURE ID: 1] methodmanufacturing method, production method, system, methodology, printing method, computerized method, first method[FEATURE ID: 1] method, process
[TRANSITIVE ID: 2] producing, generating, compositingdisplaying, printing, forming, constructing, creating, rendering, processing[TRANSITIVE ID: 2] embedding, providing, defining, having, changing, machine identification
[FEATURE ID: 3] composite machine, halftone cells, distinguishable patterns, adjacent visible halftone cells, graphical image, glyphs, human invisible print materials, supplementary information, invisible print materials, readable character, additional computer data, color parameters, first document, composite document, memorytext, graphics, indicia, information, data, pixels, symbols[FEATURE ID: 3] binary data, nominal line art, human viewer, visible light scan data, new lines, initial banknote artwork, plural artwork elements
[FEATURE ID: 4] readable, readable document, readable imageviewable, legible, visible, recognizable, interpretable, identifiable, perceptible[FEATURE ID: 4] apparent
[FEATURE ID: 5] humaninformation, image, data[FEATURE ID: 5] artwork
[TRANSITIVE ID: 6] comprisingincluding, includes, comprises, by, containing, involving, having[TRANSITIVE ID: 6] comprising
[FEATURE ID: 7] background image, substrate, pointdocument, surface, paper, matrix, bill, glyph, currency[FEATURE ID: 7] banknote, line art
[TRANSITIVE ID: 8] saidthe, this, each, such[TRANSITIVE ID: 8] said
[TRANSITIVE ID: 9] codedplural, multiple, individual[TRANSITIVE ID: 9] certain
[FEATURE ID: 10] glyphtone cellspixels, areas, portions, features, elements, segments, points[FEATURE ID: 10] virtual regions, regions, plural, changes, plural lines, artwork elements, lines
[FEATURE ID: 11] grayscale image data values, portion, location identifier, font identifiercolor, number, location, pattern, magnitude, shape, region[FEATURE ID: 11] luminance value, luminance, width, position
[FEATURE ID: 12] second image suchsecond, manner, way, sense[FEATURE ID: 12] second direction different
[TRANSITIVE ID: 13] decoded, vieweddistinguished, identified, sensed, interpreted, read, recognized, seen[TRANSITIVE ID: 13] detected
[FEATURE ID: 14] second image, spatial pointer, versionimage, output, overlay, pattern, article, second, other[FEATURE ID: 14] excerpt, first
[FEATURE ID: 15] claimformula, step, claim of, the claim, fig claim, item, figure[FEATURE ID: 15] claim
[FEATURE ID: 16] areaenvelope, opening, array, aperture, image, outline, amount[FEATURE ID: 16] area
1 . A method [FEATURE ID: 1]

of producing [TRANSITIVE ID: 2]

a composite machine [FEATURE ID: 3]

- readable [FEATURE ID: 4]

and human [FEATURE ID: 5]

- readable document [FEATURE ID: 4]

comprising [TRANSITIVE ID: 6]

: generating [TRANSITIVE ID: 2]

a background image [FEATURE ID: 7]

on a substrate [FEATURE ID: 7]

, said [TRANSITIVE ID: 8]

background image comprising coded [TRANSITIVE ID: 9]

glyphtone cells [FEATURE ID: 10]

based on grayscale image data values [FEATURE ID: 11]

, each of said halftone cells [FEATURE ID: 3]

comprising one of at least two distinguishable patterns [FEATURE ID: 3]

; compositing [TRANSITIVE ID: 2]

the background image with a second image such [FEATURE ID: 12]

that two or more adjacent visible halftone cells [FEATURE ID: 3]

may be decoded [TRANSITIVE ID: 13]

and the second image [FEATURE ID: 14]

may be viewed [TRANSITIVE ID: 13]

. 2 . The method of claim [FEATURE ID: 15]

1 , wherein the second image comprises a human - readable image [FEATURE ID: 4]

. 3 . The method of claim 1 , wherein the second image comprises a graphical image [FEATURE ID: 3]

. 4 . The method of claim 1 , wherein the second image is spatially registered with the background image . 5 . The method of claim 1 , wherein at least a portion [FEATURE ID: 11]

of the background image is printed using glyphs [FEATURE ID: 3]

. 6 . The method of claim 1 , wherein at least a portion of the background image is printed using human invisible print materials [FEATURE ID: 3]

. 7 . The method of claim 1 , wherein the background image comprises a digital encoding of the second image . 8 . The method of claim 1 , wherein the background image includes at least one spatial pointer [FEATURE ID: 14]

. 9 . The method of claim 8 , wherein the spatial pointer includes a location identifier [FEATURE ID: 11]

and supplementary information [FEATURE ID: 3]

. 10 . The method of claim 9 , wherein the location identifier refers to a point [FEATURE ID: 7]

on the substrate . 11 . The method of claim 9 , wherein the location identifier refers to an area [FEATURE ID: 16]

on the substrate . 12 . The method of claim 11 wherein the area comprises human - invisible print materials [FEATURE ID: 3]

. 13 . The method of claim 9 , wherein the supplementary information defines a human - readable character [FEATURE ID: 3]

. 14 . The method of claim 13 , wherein the supplementary information includes a font identifier [FEATURE ID: 11]

. 15 . The method of claim 9 , wherein the supplementary information is a pointer to additional computer data [FEATURE ID: 3]

. 16 . The method of claim 9 , wherein the supplementary information defines one or more color parameters [FEATURE ID: 3]

. 17 . A method for comparing a first document [FEATURE ID: 3]

to a second document , comprising : inputting a composite document [FEATURE ID: 3]

into a memory [FEATURE ID: 3]

, said composite document comprised of a first image overlaying a second image ; separating the first image from the second image ; decoding the second image ; and comparing the first image to a decoded version [FEATURE ID: 14]
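The glyphtone idea in claim 1 above, where each halftone cell carries data by showing one of at least two distinguishable patterns while its ink coverage still tracks the grayscale image, can be sketched as follows; the slash/backslash patterns and both helper names are illustrative assumptions:

```python
def encode_glyphtone_cells(gray_values, bits):
    # One data bit per halftone cell: the bit selects which of two
    # distinguishable patterns ('/' or '\') renders the cell, while the
    # grayscale value still sets the cell's ink coverage, keeping the
    # background image human-viewable.
    cells = []
    for g, b in zip(gray_values, bits):
        pattern = "/" if b == 0 else "\\"
        coverage = 1.0 - g / 255.0      # darker image, more ink
        cells.append((pattern, round(coverage, 3)))
    return cells

def decode_glyphtone_cells(cells):
    # Reading adjacent visible cells recovers the embedded bits.
    return [0 if pattern == "/" else 1 for pattern, _ in cells]

cells = encode_glyphtone_cells([0, 128, 255, 64], [1, 0, 1, 1])
assert decode_glyphtone_cells(cells) == [1, 0, 1, 1]
```

Because the bit lives in the pattern choice and the tone lives in the coverage, the same cells serve the machine-readable and human-readable roles at once, which is the composite property the claim recites.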

1 . A method [FEATURE ID: 1]

of embedding [TRANSITIVE ID: 2]

binary data [FEATURE ID: 3]

in a banknote [FEATURE ID: 7]

, comprising [TRANSITIVE ID: 6]

: providing [TRANSITIVE ID: 2]

nominal line art [FEATURE ID: 3]

for the banknote ; defining [TRANSITIVE ID: 2]

a plurality of virtual regions [FEATURE ID: 10]

in at least an excerpt [FEATURE ID: 14]

of said [TRANSITIVE ID: 8]

line art [FEATURE ID: 7]

, each of said regions [FEATURE ID: 10]

having [TRANSITIVE ID: 2]

an area [FEATURE ID: 16]

less than 0.001 square inches ; and changing [TRANSITIVE ID: 2]

a luminance value [FEATURE ID: 11]

of plural [FEATURE ID: 10]

of said regions to embed binary data therein , wherein said changes [FEATURE ID: 10]

are not apparent [FEATURE ID: 4]

to a human viewer [FEATURE ID: 3]

of the banknote , yet can be detected [TRANSITIVE ID: 13]

from visible light scan data [FEATURE ID: 3]

corresponding to said banknote . 2 . The method of claim [FEATURE ID: 15]

1 further comprising changing said luminance [FEATURE ID: 11]

by modulating the width [FEATURE ID: 11]

of plural lines [FEATURE ID: 10]

in said line art . 3 . The method of claim 1 further comprising changing said luminance by modulating the position [FEATURE ID: 11]

of plural lines in said line art . 4 . The method of claim 1 further comprising changing said luminance by inserting new lines [FEATURE ID: 3]

in said line art . 5 . A banknote produced by the process [FEATURE ID: 1]

of claim 1 . 6 . A method for encoding plural - bit digital data in a banknote , to facilitate later machine identification [FEATURE ID: 2]

of the banknote , comprising : receiving initial banknote artwork [FEATURE ID: 3]

including plural artwork elements [FEATURE ID: 3]

; changing the position or dimension of certain [FEATURE ID: 9]

of said artwork elements [FEATURE ID: 10]

to steganographically encode said plural - bit digital data , yielding adjusted banknote artwork [FEATURE ID: 5]

; and printing a banknote corresponding to said adjusted banknote artwork . 7 . The method of claim 6 in which the initial banknote artwork includes line art , and the method includes slightly changing the positions of lines [FEATURE ID: 10]

comprising said artwork to encode said plural - bit data . 8 . The method of claim 7 that includes , for at least one line , changing its position in a first direction at a first region therealong , and changing its position in a second direction different [FEATURE ID: 12]

than the first [FEATURE ID: 14]
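The banknote claims above embed bits by imperceptibly modulating line art, e.g. changing luminance by modulating line width (claim 2). A minimal sketch, assuming line art is reduced to a list of nominal line widths and that the detector has the nominal widths for comparison (helper names are hypothetical):

```python
def embed_bits_in_line_widths(nominal_widths, bits, delta=0.01):
    # Widen a line slightly for a 1 bit, narrow it for a 0 bit; the
    # change is small enough not to be apparent to a human viewer.
    return [w + (delta if b else -delta)
            for w, b in zip(nominal_widths, bits)]

def detect_bits(scanned_widths, nominal_widths):
    # Visible-light scan data compared against the nominal line art.
    return [1 if s > n else 0
            for s, n in zip(scanned_widths, nominal_widths)]

nominal = [0.30, 0.30, 0.25, 0.40]          # nominal line widths
marked = embed_bits_in_line_widths(nominal, [1, 0, 0, 1])
assert detect_bits(marked, nominal) == [1, 0, 0, 1]
```

The same structure covers the sibling claims: modulating line position or inserting new lines are just different ways of nudging local luminance up or down per region.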








Targeted Patent:

Patent: US6641053B1
Filed: 2002-10-16
Issued: 2003-11-04
Patent Holder: (Original Assignee) Xerox Corp     (Current Assignee) BASSFIELD IP LLC
Inventor(s): Jeff Breidenbach, David L. Hecht

Title: Foreground/background document processing with dataglyphs

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US20020122568A1
Filed: 1998-04-30
Issued: 2002-09-05
Patent Holder: (Original Assignee) Jian Zhao     
Inventor(s): Jian Zhao

Title: Digital authentication with digital and analog documents

[FEATURE ID: 1] methoddevice, mechanism, system, means, processing apparatus, structure, information[FEATURE ID: 1] Apparatus, apparatus, storage system
[TRANSITIVE ID: 2] producing, generating, compositingproviding, displaying, defining, receiving, using, representing, encoding[TRANSITIVE ID: 2] including, employing
[FEATURE ID: 3] composite machine, readable imagecomputer, user, text, memory, graphic, device, symbol[FEATURE ID: 3] processor, semantic information reader, network, security pattern, digital signature, watermark, communications system, verification system, photo ID
[FEATURE ID: 4] humandocument, operator, user, data, information, ink[FEATURE ID: 4] analog form, associated read security pattern
[TRANSITIVE ID: 5] comprisingincluding, includes, comprises, containing, involving, having[TRANSITIVE ID: 5] comprising
[FEATURE ID: 6] background image, substrate, portion, digital encoding, point, first document, second document, composite document, versionbackground, first, template, surface, sample, signature, computer[FEATURE ID: 6] digital representation, graphic
[FEATURE ID: 7] glyphtone cells, grayscale image data values, halftone cells, distinguishable patterns, adjacent visible halftone cells, human invisible print materials, supplementary information, invisible print materials, additional computer data, color parametersinformation, indicia, symbols, pixels, patterns, content, elements[FEATURE ID: 7] object, first authentication information, second authentication information, reference codes, semantic information, apparatuses, part
[TRANSITIVE ID: 8] based, decodedstored, matched, mapped, identified, compared, combined, encoded[TRANSITIVE ID: 8] associated
[FEATURE ID: 9] second image suchnotice, representation, result[FEATURE ID: 9] notification
[FEATURE ID: 10] second imageoverlay, output, object, article[FEATURE ID: 10] image
[TRANSITIVE ID: 11] vieweddetermined, retrieved, generated[TRANSITIVE ID: 11] stored
[FEATURE ID: 12] claimfigure, requirement, item, patent, paragraph, embodiment, statement[FEATURE ID: 12] claim
[TRANSITIVE ID: 13] comprisesutilizes, uses, identifies, provides[TRANSITIVE ID: 13] receives
[FEATURE ID: 14] graphical image, location identifier, font identifier, pointercode, label, descriptor, symbol, tag, signature, value[FEATURE ID: 14] reference code
[FEATURE ID: 15] spatial pointerimage, object, identifier[FEATURE ID: 15] analog form converter
[FEATURE ID: 16] areainterface, address, element[FEATURE ID: 16] authentication information reader
[FEATURE ID: 17] readable characteridentifier, identification, identity[FEATURE ID: 17] indication
[FEATURE ID: 18] memorycomputer, receiver, database, host, terminal, controller[FEATURE ID: 18] user
1 . A method [FEATURE ID: 1]

of producing [TRANSITIVE ID: 2]

a composite machine [FEATURE ID: 3]

- readable and human [FEATURE ID: 4]

- readable document comprising [TRANSITIVE ID: 5]

: generating [TRANSITIVE ID: 2]

a background image [FEATURE ID: 6]

on a substrate [FEATURE ID: 6]

, said background image comprising coded glyphtone cells [FEATURE ID: 7]

based [TRANSITIVE ID: 8]

on grayscale image data values [FEATURE ID: 7]

, each of said halftone cells [FEATURE ID: 7]

comprising one of at least two distinguishable patterns [FEATURE ID: 7]

; compositing [TRANSITIVE ID: 2]

the background image with a second image such [FEATURE ID: 9]

that two or more adjacent visible halftone cells [FEATURE ID: 7]

may be decoded [TRANSITIVE ID: 8]

and the second image [FEATURE ID: 10]

may be viewed [TRANSITIVE ID: 11]

. 2 . The method of claim [FEATURE ID: 12]

1 , wherein the second image comprises [TRANSITIVE ID: 13]

a human - readable image [FEATURE ID: 3]

. 3 . The method of claim 1 , wherein the second image comprises a graphical image [FEATURE ID: 14]

. 4 . The method of claim 1 , wherein the second image is spatially registered with the background image . 5 . The method of claim 1 , wherein at least a portion [FEATURE ID: 6]

of the background image is printed using glyphs . 6 . The method of claim 1 , wherein at least a portion of the background image is printed using human invisible print materials [FEATURE ID: 7]

. 7 . The method of claim 1 , wherein the background image comprises a digital encoding [FEATURE ID: 6]

of the second image . 8 . The method of claim 1 , wherein the background image includes at least one spatial pointer [FEATURE ID: 15]

. 9 . The method of claim 8 , wherein the spatial pointer includes a location identifier [FEATURE ID: 14]

and supplementary information [FEATURE ID: 7]

. 10 . The method of claim 9 , wherein the location identifier refers to a point [FEATURE ID: 6]

on the substrate . 11 . The method of claim 9 , wherein the location identifier refers to an area [FEATURE ID: 16]

on the substrate . 12 . The method of claim 11 wherein the area comprises human - invisible print materials [FEATURE ID: 7]

. 13 . The method of claim 9 , wherein the supplementary information defines a human - readable character [FEATURE ID: 17]

. 14 . The method of claim 13 , wherein the supplementary information includes a font identifier [FEATURE ID: 14]

. 15 . The method of claim 9 , wherein the supplementary information is a pointer [FEATURE ID: 14]

to additional computer data [FEATURE ID: 7]

. 16 . The method of claim 9 , wherein the supplementary information defines one or more color parameters [FEATURE ID: 7]

. 17 . A method for comparing a first document [FEATURE ID: 6]

to a second document [FEATURE ID: 6]

, comprising : inputting a composite document [FEATURE ID: 6]

into a memory [FEATURE ID: 18]

, said composite document comprised of a first image overlaying a second image ; separating the first image from the second image ; decoding the second image ; and comparing the first image to a decoded version [FEATURE ID: 6]

1 . Apparatus [FEATURE ID: 1]

for determining authenticity of a digital representation [FEATURE ID: 6]

of an object [FEATURE ID: 7]

, the digital representation including [TRANSITIVE ID: 2]

embedded first authentication information [FEATURE ID: 7]

and the apparatus [FEATURE ID: 1]

comprising [TRANSITIVE ID: 5]

: a storage system [FEATURE ID: 1]

in which stored [TRANSITIVE ID: 11]

second authentication information [FEATURE ID: 7]

is associated [TRANSITIVE ID: 8]

with stored reference codes [FEATURE ID: 7]

; and a processor [FEATURE ID: 3]

which receives [TRANSITIVE ID: 13]

the digital representation and a reference code [FEATURE ID: 14]

associated therewith , the processor including an authentication information reader [FEATURE ID: 16]

and the processor employing [TRANSITIVE ID: 2]

the reference code to retrieve the second authentication information associated therewith from the storage system , employing the authentication information reader to read the embedded first authentication information , and employing the read first authentication information and the second authentication information to determine authenticity of the digital representation . 2 . The apparatus set forth in claim [FEATURE ID: 12]

1 wherein : the reference code is included in the digital representation . 3 . The apparatus set forth in claim 1 wherein : a key is stored in the storage system and associated with the reference code ; and the processor further employs the reference code to retrieve the key ; and the authentication information reader uses the key to read the first authentication information . 4 . The apparatus set forth in claim 1 wherein : the second authentication information is based on semantic information [FEATURE ID: 7]

contained in the digital representation ; and the authentication information reader includes a semantic information reader [FEATURE ID: 3]

and an authentication information maker , the semantic information reader reading the semantic information from the digital representation and the authentication information maker producing the first authentication information from the read semantic information . 5 . The apparatus set forth in claim 1 wherein : the processor is attached to a network [FEATURE ID: 3]

, receives the digital representation from a source thereof via the network , and provides an indication [FEATURE ID: 17]

of the authenticity of the digital representation to the source . 6 . The apparatus set forth in claim 5 wherein : the source makes the digital representation from an analog form [FEATURE ID: 4]

. 7 . The apparatus set forth in claim 6 wherein : the source associates the reference code with the digital representation . 8 . The apparatus set forth in claim 7 wherein : the source receives the reference code from a user [FEATURE ID: 18]

of the source . 9 . The apparatus set forth in claim 6 wherein : the analog form includes a security pattern [FEATURE ID: 3]

; the source reads the security pattern and associates the read security pattern with the digital representation ; and the authentication information reader further processes the embedded first authentication information with the associated read security pattern [FEATURE ID: 4]

to produce the read first authentication information . 10 . The apparatus set forth in claim 5 wherein : there is a plurality of the apparatuses [FEATURE ID: 7]

in the network ; and a given one of the apparatuses uses the reference code to route the received digital representation and the reference code to another one of the apparatuses . 11 . The apparatus set forth in claim 6 wherein : the embedded first authentication information is a digital signature [FEATURE ID: 3]

embedded as a watermark [FEATURE ID: 3]

in a graphic [FEATURE ID: 6]

on the analog form . 12 . Apparatus for checking the authenticity of an analog form , the analog form including embedded first authentication information and the apparatus comprising : an analog form converter [FEATURE ID: 15]

that receives the analog form and makes a digital representation of at least the first authentication information ; and a communications system [FEATURE ID: 3]

, the analog form converter employing the communications system to send the digital representation and a reference code to a verification system [FEATURE ID: 3]

that employs the reference code and the first authentication information to determine whether the analog form is authentic and to receive a notification [FEATURE ID: 9]

whether the analog form is authentic from the verification system . 13 . The apparatus set forth in claim 12 wherein : the reference code is included in the digital representation . 14 . The apparatus set forth in claim 12 wherein : the reference code is sent in association with but not as part [FEATURE ID: 7]

of the digital representation . 15 . The apparatus set forth in claim 12 wherein : the verification system employs the reference code to locate a key that is required to read the first authentication information . 16 . The apparatus set forth in claim 12 wherein : the verification system employs the reference code to locate second authentication information and additionally uses the second authentication information to determine whether the digital representation is authentic . 17 . The apparatus set forth in claim 12 wherein : the analog form converter analyzes the digital representation to determine whether the verification system can check the authenticity of the digital representation before sending the digital representation . 18 . The apparatus set forth in claim 12 wherein : the analog form includes an image [FEATURE ID: 10]

in which the first authentication information is embedded . 19 . The apparatus set forth in claim 18 wherein : the analog form is a photo ID [FEATURE ID: 3]








Targeted Patent:

Patent: US6641053B1
Filed: 2002-10-16
Issued: 2003-11-04
Patent Holder: (Original Assignee) Xerox Corp     (Current Assignee) BASSFIELD IP LLC
Inventor(s): Jeff Breidenbach, David L. Hecht

Title: Foreground/background document processing with dataglyphs

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US6442283B1
Filed: 1999-01-11
Issued: 2002-08-27
Patent Holder: (Original Assignee) Digimarc Corp     (Current Assignee) Digimarc Corp
Inventor(s): Ahmed Tewfik, Mitchell D. Swanson, Bin Zhu

Title: Multimedia data embedding








Targeted Patent:

Patent: US6641053B1
Filed: 2002-10-16
Issued: 2003-11-04
Patent Holder: (Original Assignee) Xerox Corp     (Current Assignee) BASSFIELD IP LLC
Inventor(s): Jeff Breidenbach, David L. Hecht

Title: Foreground/background document processing with dataglyphs

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US20020112171A1
Filed: 1995-02-13
Issued: 2002-08-15
Patent Holder: (Original Assignee) Intertrust Technologies Corp     (Current Assignee) Intertrust Technologies Corp
Inventor(s): Karl Ginter, Victor Shear, Francis Spahn, David Van Wie

Title: Systems and methods for secure transaction management and electronic rights protection

[FEATURE ID: 1] method, second image suchprocedure, technique, step, service, use, system, transaction[FEATURE ID: 1] operating process, component assembly, process, steps, method, system process, operation
[TRANSITIVE ID: 2] producing, generatingpreparing, defining, providing, establishing, creating, developing, obtaining[TRANSITIVE ID: 2] retrieving, checking, performing
[FEATURE ID: 3] composite machine, human, substrate, portion, point, area, additional computer data, composite document, memorycomputer, machine, document, device, region, software, location[FEATURE ID: 3] secure component, processing environment, secure operating system environment
[TRANSITIVE ID: 4] comprisingof, involves, includes, wherein, for, with, having[TRANSITIVE ID: 4] including, comprises
[FEATURE ID: 5] glyphtone cells, grayscale image data values, distinguishable patterns, adjacent visible halftone cells, supplementary information, invisible print materials, readable character, color parametersdata, content, indicia, symbols, text, pixels, patterns[FEATURE ID: 5] executable code, directions, information, permissions record, code, assembly directors
[TRANSITIVE ID: 6] compositingcombining, merging, modifying, linking, integrating, coupling[TRANSITIVE ID: 6] using
[FEATURE ID: 7] second imageobject, output, article[FEATURE ID: 7] executable program
[FEATURE ID: 8] claimclaimed, paragraph, need, clair, item, figure, step[FEATURE ID: 8] claim
[TRANSITIVE ID: 9] comprisescontains, describes, indicates, identifies, includes, defines, represents[TRANSITIVE ID: 9] specifies
[FEATURE ID: 10] leastminus, most, east, last, lest, lease, lo least[FEATURE ID: 10] least
[FEATURE ID: 11] location identifiervalue, identifier, code[FEATURE ID: 11] decryption key
[FEATURE ID: 12] first documentfile, document, template[FEATURE ID: 12] load module
[FEATURE ID: 13] second documentcontrol, memory, signature[FEATURE ID: 13] security wrapper
[FEATURE ID: 14] stepstep of, method, function, stage, method step, steps[FEATURE ID: 14] step
1 . A method [FEATURE ID: 1]

of producing [TRANSITIVE ID: 2]

a composite machine [FEATURE ID: 3]

- readable and human [FEATURE ID: 3]

- readable document comprising [TRANSITIVE ID: 4]

: generating [TRANSITIVE ID: 2]

a background image on a substrate [FEATURE ID: 3]

, said background image comprising coded glyphtone cells [FEATURE ID: 5]

based on grayscale image data values [FEATURE ID: 5]

, each of said halftone cells comprising one of at least two distinguishable patterns [FEATURE ID: 5]

; compositing [TRANSITIVE ID: 6]

the background image with a second image such [FEATURE ID: 1]

that two or more adjacent visible halftone cells [FEATURE ID: 5]

may be decoded and the second image [FEATURE ID: 7]

may be viewed . 2 . The method of claim [FEATURE ID: 8]

1 , wherein the second image comprises [TRANSITIVE ID: 9]

a human - readable image . 3 . The method of claim 1 , wherein the second image comprises a graphical image . 4 . The method of claim 1 , wherein the second image is spatially registered with the background image . 5 . The method of claim 1 , wherein at least a portion [FEATURE ID: 3]

of the background image is printed using glyphs . 6 . The method of claim 1 , wherein at least a portion of the background image is printed using human invisible print materials . 7 . The method of claim 1 , wherein the background image comprises a digital encoding of the second image . 8 . The method of claim 1 , wherein the background image includes at least [FEATURE ID: 10]

one spatial pointer . 9 . The method of claim 8 , wherein the spatial pointer includes a location identifier [FEATURE ID: 11]

and supplementary information [FEATURE ID: 5]

. 10 . The method of claim 9 , wherein the location identifier refers to a point [FEATURE ID: 3]

on the substrate . 11 . The method of claim 9 , wherein the location identifier refers to an area [FEATURE ID: 3]

on the substrate . 12 . The method of claim 11 wherein the area comprises human - invisible print materials [FEATURE ID: 5]

. 13 . The method of claim 9 , wherein the supplementary information defines a human - readable character [FEATURE ID: 5]

. 14 . The method of claim 13 , wherein the supplementary information includes a font identifier . 15 . The method of claim 9 , wherein the supplementary information is a pointer to additional computer data [FEATURE ID: 3]

. 16 . The method of claim 9 , wherein the supplementary information defines one or more color parameters [FEATURE ID: 5]

. 17 . A method for comparing a first document [FEATURE ID: 12]

to a second document [FEATURE ID: 13]

, comprising : inputting a composite document [FEATURE ID: 3]

into a memory [FEATURE ID: 3]

, said composite document comprised of a first image overlaying a second image ; separating the first image from the second image ; decoding the second image ; and comparing the first image to a decoded version of the second image . 18 . The method of claim 17 , further comprising the step [FEATURE ID: 14]

1 . A secure component [FEATURE ID: 3]

- based operating process [FEATURE ID: 1]

including [TRANSITIVE ID: 4]

: ( a ) retrieving [TRANSITIVE ID: 2]

at least one component ; ( b ) retrieving a record that specifies [TRANSITIVE ID: 9]

a component assembly [FEATURE ID: 1]

; ( c ) checking [TRANSITIVE ID: 2]

said component and / or said record for validity ; ( d ) using [TRANSITIVE ID: 6]

said component to form said component assembly in accordance with said record ; and ( e ) performing [TRANSITIVE ID: 2]

a process [FEATURE ID: 1]

based at least in part on said component assembly . 2 . A process as in claim [FEATURE ID: 8]

1 wherein said step [FEATURE ID: 14]

( c ) comprises [TRANSITIVE ID: 4]

executing said component assembly . 3 . A process as in claim 1 wherein said component comprises executable code [FEATURE ID: 5]

. 4 . A process as in claim 1 wherein said component comprises a load module [FEATURE ID: 12]

. 5 . A process as in claim 1 wherein : said record comprises : ( i ) directions [FEATURE ID: 5]

for assembling said component assembly , and ( ii ) information [FEATURE ID: 5]

that at least in part specifies a control ; and said process further comprises controlling said step ( d ) and / or said step ( e ) based at least in part on said control . 6 . A process as in claim 1 wherein said component has a security wrapper [FEATURE ID: 13]

, and said controlling step comprises selectively opening said security wrapper based at least in part on said control . 7 . A process as in claim 1 wherein : said permissions record [FEATURE ID: 5]

includes at least [FEATURE ID: 10]

one decryption key [FEATURE ID: 11]

; and said controlling step includes controlling use of said decryption key . 8 . A process as in claim 1 including performing at least two of said steps [FEATURE ID: 1]

( a ) and ( e ) within a protected processing environment [FEATURE ID: 3]

. 9 . A process as in claim 1 including performing at least two of said steps ( a ) and ( e ) at least in part within tamper - resistant hardware . 10 . A method [FEATURE ID: 1]

as in claim 1 wherein said performing step ( e ) includes metering usage . 11 . A method as in claim 1 wherein said performing step ( e ) includes auditing usage . 12 . A method as in claim 1 wherein said performing step ( e ) includes budgeting usage . 13 . A secure component operating system process [FEATURE ID: 1]

including : receiving a component ; receiving directions specifying use of said component to form a component assembly ; authenticating said received component and / or said directions ; forming , using said component , said component assembly based at least in part on said received directions ; and using said component assembly to perform at least one operation [FEATURE ID: 1]

. 14 . A method comprising performing the following steps within a secure operating system environment [FEATURE ID: 3]

: providing code [FEATURE ID: 5]

; providing directions specifying assembly of said code into an executable program [FEATURE ID: 7]

; checking said received code and / or said assembly directors [FEATURE ID: 5]
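The secure-component process above retrieves a component and a record specifying an assembly, checks them for validity, forms the assembly, and performs a process on it (steps (a) through (e)). A minimal sketch, assuming validity checking is digest comparison and an "assembly" is just an ordered join of component code strings (all names here are hypothetical):

```python
import hashlib

def digest(code):
    return hashlib.sha256(code.encode()).hexdigest()

components = {"meter": "meter-code", "budget": "budget-code"}
record = {                     # the record that specifies the assembly
    "assembly": ["meter", "budget"],
    "digests": {name: digest(code) for name, code in components.items()},
}

def perform(components, record):
    # Steps (c)-(e): check each component for validity against the
    # record, form the component assembly, then perform the process.
    for name in record["assembly"]:
        if digest(components[name]) != record["digests"][name]:
            raise ValueError(f"invalid component: {name}")
    return " -> ".join(components[n] for n in record["assembly"])

assert perform(components, record) == "meter-code -> budget-code"
```

Swapping in a tampered component makes the digest check fail before anything is assembled, which is the point of checking "said component and / or said record for validity" ahead of steps (d) and (e).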