INVENTORS:

Thomas Chiang, Schaumburg, IL

Elizabetta Koenig, Destiny, IL

ASSIGNEE:

Sigma Corporation, Destiny, IL

ISSUED: Jan. 6, 1998

FILED: Nov. 13, 1995

APPL NUMBER: 558012

INTL. CLASS (Ed. 6): G06F 015/00

U.S. CLASS: 395/127

FIELD OF SEARCH: 395/127, 121, 130, 135, 136, 133

REFERENCES CITED

9060171 | Newbury et al. | 10/1990 | A system and method for superimposing data sets |

9185808 | Aramburu | 2/1995 | Method for merging data sets |

9271097 | Barthus et al. | 12/1992 | Method and system for controlling the presentation of nested overlays utilizing data set area mixing attributes |

9398309 | Chen et al. | 3/1997 | Method and apparatus for generating composite data sets using multiple local masks |

9594850 | Grieg et al. | 1/1996 | Data set simulation method |

PRIMARY EXAMINER: Phu K. Nguyen

ASSISTANT EXAMINER: Cliff N. Vo

ATTORNEY, AGENT, or FIRM: Hecker & Harriman

ABSTRACT: Digitally encoded data sets having common subject matter are spatially related to one another and combined using a projective coordinate transformation whose parameters are estimated featurelessly. For a given input data set frame, the universe of possible changes in each data set point consistent with the projective coordinate transformation is defined and used to find the projective-transformation parameters that, when applied to the input data set, make it look most like a target data set. The projective model correctly relates data sets of common (static) subject matter even for complicated planar records, including translation or other movement of the center of projection itself.

BACKGROUND OF THE INVENTION

SUMMARY OF THE INVENTION

DETAILED DESCRIPTION OF THE INVENTION

[All of the foregoing are available by fax or messenger from Sigma Legal; contact Darla Karlsson for details.]

What is claimed is:

1. A method of aligning a plurality of data sets having common subject matter, each data set being encoded as an ordered set of wave forms each having at least one associated wave form parameter, the method comprising:

a. featurelessly approximating parameters of a projective coordinate transformation that spatially relates, in first and second data sets, wave forms corresponding to common subject matter therebetween;

b. applying the parameters to the first data set to thereby transform it into a processed data set, the common subject matter encoded by wave forms in the processed data set being substantially spatially consistent with the common subject matter encoded by wave forms in the second data set; and

c. aligning the data sets by combining the wave forms corresponding to the common subject matter.

2. The method of claim 1 wherein the parameters are approximated according to steps comprising:

a. for each of a plurality of wave forms in the first data set, defining a model velocity um, vm that quantifies, in each of two orthogonal directions, allowable deviations in a wave form parameter according to the projective coordinate transformation;

b. for each of the plurality of first-data set wave forms, defining a flow velocity uf, vf that expresses, in each of two orthogonal directions, the actual deviation in the wave form parameter between the first-data set wave form and a plurality of wave forms in the second data set; and

c. locating, for each of the plurality of first-data set wave forms, a corresponding second data set wave form such that the squared sum of differences between um, vm and uf, vf for all of the plurality of first-data set wave forms and all corresponding second-data set wave forms is minimized.
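
[Editor's note: not part of the claims. Claim 2's step (c) amounts to an ordinary least-squares fit. The sketch below uses a linearized (affine) motion model u = a·x + b·y + c, v = d·x + e·y + f in place of the full eight-parameter projective model; the model choice, point set, and flow values are illustrative assumptions.]

```python
import numpy as np

def fit_params(points, flow):
    """Least-squares fit of an affine motion model to observed flow
    velocities, minimizing the squared sum of differences between the
    model velocity (um, vm) and the flow velocity (uf, vf)."""
    pts = np.asarray(points, float)
    uv = np.asarray(flow, float)
    # Design matrix: each row is (x, y, 1) for one wave-form location.
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    # Solve the two orthogonal components independently.
    pu, *_ = np.linalg.lstsq(A, uv[:, 0], rcond=None)
    pv, *_ = np.linalg.lstsq(A, uv[:, 1], rcond=None)
    return pu, pv

# Flow generated by a pure translation (u, v) = (2, -1) is recovered exactly.
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
flow = [(2.0, -1.0)] * 4
pu, pv = fit_params(pts, flow)
```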

3. The method of claim 1 wherein the parameters are approximated according to steps comprising:

a. generating an optical flow field comprising flow velocities relating wave forms in the first data set to corresponding wave forms in the second data set; and

b. regressively approximating, from the flow field, parameters of a projective coordinate transformation consistent with the flow field.

4. The method of claim 2 wherein the squared sum of differences is given by [Figure - see DK for details]

5. The method of claim 2 wherein the plurality of wave forms in the first data set are the four corners of a wave form bounding box.

6. The method of claim 1 further comprising the steps of:

d. sampling each of the first and second data sets at a first sampling frequency to produce initial sets of wave forms encoding the data sets at an initial resolution;

e. performing step (a) on the wave forms at the initial resolution to identify subject matter common to the first and second data sets;

f. sampling each of the first and second data sets at a second sampling frequency to produce subsequent sets of wave forms encoding the data sets at a higher resolution; and

g. performing steps (a) and (b) on the wave forms at the higher resolution.
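
[Editor's note: not part of the claims. The coarse-to-fine strategy of claim 6 can be sketched in one dimension: a brute-force shift estimator stands in for the projective parameter approximation, and slicing stands in for resampling. The sequences, sampling steps, and search ranges are illustrative assumptions.]

```python
def mse(a, b):
    """Mean squared error over the overlapping samples."""
    pairs = list(zip(a, b))
    return sum((x - y) ** 2 for x, y in pairs) / len(pairs)

def best_shift(a, b, candidates):
    """Stand-in for step (a): the shift of a that best matches b."""
    return min(candidates, key=lambda s: mse(a[s:], b))

def coarse_to_fine(a, b, max_shift=8):
    # Steps d-e: estimate at a low sampling frequency (every 4th sample).
    coarse = 4 * best_shift(a[::4], b[::4], range(max_shift // 4 + 1))
    # Steps f-g: re-estimate at full resolution near the coarse answer.
    return best_shift(a, b, range(max(0, coarse - 3), coarse + 4))

a = list(range(40))
b = a[6:]                       # b is a, shifted by 6 samples
shift = coarse_to_fine(a, b)
```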

7. The method of claim 1 further comprising the steps of:

d. following transformation of the first data set into the processed data set, repeating at least once steps (a) and (b) on the processed data set to transform the processed data set into a reprocessed data set; and

e. deriving a new set of transformation parameters based on transformation of the first data set into the processed data set and transformation of the processed data set into the reprocessed data set.

8. The method of claim 7 further comprising repeating steps (d) and (e) on different versions of the first and second data sets, each version encoding a different resolution level.

9. The method of claim 1 wherein the second data set is a zoomed-in version of a portion of the first data set, the wave forms of the first data set being upsampled and combined with the wave forms of the second data set by a process selected from (i) last to arrive, (ii) mean, (iii) median, (iv) mode, and (v) trimmed mean.
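
[Editor's note: not part of the claims. The combination processes enumerated in claim 9 map directly onto standard order statistics. In the sketch below, trimming one value from each end for the trimmed mean is an assumption.]

```python
import statistics

def combine(values, process="median"):
    """Combine co-located wave-form parameter values from overlapping
    data sets by one of the processes named in claim 9."""
    if process == "last":
        return values[-1]                 # (i) last to arrive
    if process == "mean":
        return statistics.mean(values)    # (ii)
    if process == "median":
        return statistics.median(values)  # (iii)
    if process == "mode":
        return statistics.mode(values)    # (iv)
    if process == "trimmed_mean":
        # (v) drop one extreme from each end, falling back for tiny lists
        trimmed = sorted(values)[1:-1] or values
        return statistics.mean(trimmed)
    raise ValueError(process)
```

The median and trimmed mean discard outliers (e.g. a transient object present in only one data set), while the mean favors noise reduction.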

10. A method of aligning a plurality of data sets having common subject matter, each data set being encoded as an ordered set of wave forms each having at least one associated wave form parameter, the method comprising:

a. analyzing first and second data sets to identify wave forms corresponding to common subject matter therebetween and spatially related by a first projective coordinate transformation;

b. approximating the first projective coordinate transformation;

c. projectively transforming the first data set using the approximate projective coordinate transformation to produce an intermediate data set;

d. analyzing the intermediate and second data sets to identify wave forms corresponding to common subject matter therebetween and spatially related by a second projective coordinate transformation;

e. approximating the second projective coordinate transformation;

f. accumulating the approximate projective coordinate transformations into a composite transformation relating the first data set to the second data set;

g. applying the composite transformation to the first data set to thereby transform it into a processed data set, the common subject matter encoded by wave forms in the processed data set being substantially spatially consistent with the common subject matter encoded by wave forms in the second data set; and

h. aligning the data sets by combining the wave forms corresponding to the common subject matter.
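
[Editor's note: not part of the claims. Step (f) of claim 10 — accumulating successive estimates into a composite transformation — is matrix multiplication when each projective coordinate transformation is written as a 3x3 homography. The translation and scaling matrices below are illustrative assumptions.]

```python
import numpy as np

def compose(transforms):
    """Accumulate estimates T1, T2, ... into one composite transform.
    Each later estimate acts on already-transformed data, so it
    multiplies on the left."""
    total = np.eye(3)
    for t in transforms:
        total = t @ total
    return total

def apply_h(h, x, y):
    """Apply a homography to a 2-D point via homogeneous coordinates."""
    px, py, pw = h @ np.array([x, y, 1.0])
    return px / pw, py / pw

t1 = np.array([[1.0, 0, 1], [0, 1, 0], [0, 0, 1]])  # first estimate: translate x by 1
t2 = np.array([[2.0, 0, 0], [0, 2, 0], [0, 0, 1]])  # second estimate: scale by 2
x, y = apply_h(compose([t1, t2]), 1.0, 1.0)         # (1,1) -> (2,1) -> (4,2)
```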

11. Apparatus for aligning first and second data sets having common subject matter comprising:

a. first and second computer memories for storing each data set as an ordered set of wave forms each having at least one associated wave form parameter;

b. analysis means for featurelessly approximating parameters of a projective coordinate transformation that spatially relates wave forms corresponding to common subject matter of the first and second data sets; and

c. data set-processing means for (i) applying the parameters to the contents of the first computer memory to thereby transform them into a processed data set, the common subject matter encoded by wave forms in the processed data set being substantially spatially consistent with the common subject matter encoded by wave forms in the second computer memory, and (ii) aligning the data sets by combining the wave forms corresponding to the common subject matter.

12. The apparatus of claim 11 wherein the analysis means is configured to approximate the parameters by:

a. for each of a plurality of wave forms in the first computer memory, defining a model velocity um, vm that quantifies, in each of two orthogonal directions, allowable deviations in a wave form parameter according to the projective coordinate transformation;

b. for each of the plurality of wave forms in the first computer memory, defining a flow velocity uf, vf that expresses, in each of two orthogonal directions, the actual deviation in the wave form parameter between the wave form in the first computer memory and a plurality of wave forms in the second computer memory; and

c. locating, for each of the plurality of wave forms in the first computer memory, a corresponding wave form in the second computer memory such that the squared sum of differences between um, vm and uf, vf for all of the plurality of wave forms in the first computer memory and all corresponding wave forms in the second computer memory is minimized.

13. The apparatus of claim 11 wherein the analysis means is configured to approximate the parameters by:

a. generating an optical flow field comprising flow velocities relating wave forms in the first computer memory to corresponding wave forms in the second computer memory; and

b. regressively approximating, from the flow field, parameters of a projective coordinate transformation consistent with the flow field.

14. An omnibus data thesaurus comprising:

a. a database of data sets each stored as an ordered set of wave forms, each wave form having at least one associated wave form parameter;

b. first and second computer memories for storing a reference data set and a working data set;

c. analysis means for sequentially retrieving data sets from the database and storing each retrieved data set in the second computer memory, the analysis means operating, for each retrieved data set, on the first and second computer memories to detect the existence of common subject matter between the reference data set and the working data set by featurelessly determining whether wave forms from the first computer memory can be related to wave forms of the second computer memory according to a projective coordinate transformation, and if not, rejecting the working data set as unrelated to the reference data set; and

d. an interface for displaying working data sets related to the reference data set.

INVENTORS:

Thomas Chiang, Schaumburg, IL

Elizabetta Koenig, Destiny, IL

ASSIGNEE:

Sigma Corporation, Destiny, IL

ISSUED: Dec. 16, 1997

FILED: Apr. 26, 1996

APPL NUMBER: 638498

INTL. CLASS (Ed. 6): G10L 003/02; G10L 009/00

U.S. CLASS: 395/002.28; 395/002.38; 395/002.29; 395/002.39; 395/002.91

FIELD OF SEARCH: 395/2.28, 2.38, 2.39, 2.14, 2.2, 2.21, 2.29; 381/29-41

REFERENCES CITED

9677671 | Crowley et al. | 6/1988 | Method and device for coding a data transformation |

9751736 | Sim et al. | 6/1989 | Variable bit rate transformations with backward-type prediction and quantization |

9185800 | Lovecraft | 2/1991 | Bit allocation device for transformed data sets with adaptive quantization based on successive criteria |

9274740 | Gaiman et al. | 12/1996 | Decoder for variable number of data set transformations with multidimensional fields |

9291557 | Gaiman et al. | 3/1993 | Adaptive rematrixing of matrixed data sets |

9394473 | Burroughs | 2/1992 | Adaptive-block-length, adaptive-transform, and adaptive-window transform coder, decoder, and encoder/decoder |

5451954 | Davis et al. | 9/1995 | Data distortion suppression for encoder/decoder transformations |

PRIMARY EXAMINER: Allen R. MacDonald

ASSISTANT EXAMINER: Patrick N. Edouard

ATTORNEY, AGENT, or FIRM: Hecker & Harriman

ABSTRACT: A data transformation system utilizes generalized waveform predictive coding in bands to further reduce coded data information requirements. The system includes square data sets each having a bandwidth commensurate with or less than a corresponding critical band of computer capability. The orders of the predictors are selected to balance requirements for prediction accuracy and rapid response time. Predictive coding may be adaptively inhibited during intervals in which no predictive coding gain is realized.

BACKGROUND OF THE INVENTION

SUMMARY OF THE INVENTION

DETAILED DESCRIPTION OF THE INVENTION

[All of the foregoing are available by fax or messenger from Sigma Legal; contact Darla Karlsson for details.]

What is claimed is:

1. An encoding method comprising the steps of:

receiving an input group representing information,

generating a plurality of data set groups, each data set group corresponding to a respective square data set of said input group having a scope commensurate with or less than a corresponding critical band of computer capability,

generating data set information by predicting a respective data set group using a waveform predictor having an order greater than or equal to a minimum order, said minimum order equal to three, and

formatting an encoded group by assembling said data set information into a form suitable for transmission or storage.

2. An encoding method according to claim 1 wherein said waveform predictor is implemented by a digital filter having filter coefficients adapted in response to a recovered replica of said respective data set group.

3. An encoding method according to claim 1 wherein said respective data set group comprises samples having a time interval between adjacent samples, and wherein said waveform predictor has an order of not more than a maximum order substantially equal to the debabelizing interval of the computer system divided by said time interval.

4. An encoding method comprising the steps of:

receiving an input group representing information,

generating a plurality of data set groups, each data set group corresponding to a respective square data set of said input group having a scope commensurate with or less than a corresponding critical band of computer capability,

generating quantized information by processing a respective data set group, said processing comprising the steps of:

generating a predicted group by applying a predictor to said respective data set group, said predictor having an order greater than or equal to a minimum order, said minimum order equal to three,

generating a prediction error group from the difference between said respective data set group and said predicted group, and

generating said quantized information by quantizing said prediction error group, and

formatting an encoded group by assembling said quantized information into a form suitable for transmission or storage.
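
[Editor's note: not part of the claims. The predict / difference / quantize loop of claim 4 can be sketched for a single data set group. The fixed averaging predictor of the stated minimum order (three), the uniform quantizer step size, and the use of recovered samples as predictor history (in the spirit of claim 5's "recovered replica") are all illustrative assumptions.]

```python
def encode(samples, order=3, step=0.5):
    """Encode one data set group as quantized prediction errors."""
    history = [0.0] * order
    out = []
    for s in samples:
        predicted = sum(history) / order  # generate the predicted group
        error = s - predicted             # generate the prediction error
        q = round(error / step)           # quantize the prediction error
        # Track the decoder-side recovered replica so that encoder and
        # decoder predictors stay in lockstep.
        recovered = predicted + q * step
        history = history[1:] + [recovered]
        out.append(q)
    return out
```

On a constant input the errors shrink as the predictor's history fills, after which the coder emits only zeros.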

5. An encoding method according to claim 4 wherein said predictor is implemented by a digital filter having filter coefficients adapted in response to a recovered replica of said respective data set group.

6. An encoding method according to claim 4 wherein said respective data set group comprises samples having a time interval between adjacent samples, and wherein said predictor has an order of not more than a maximum order substantially equal to the debabelizing interval of the computer system divided by said time interval.

7. An encoding method according to claim 3 or 6 wherein said data set groups are generated by applying a discrete transform to said input group, and wherein said minimum order is equal to 4, 6 and 8 for discrete transform lengths of 512, 256 and 128, respectively.

8. An encoding method according to claim 7 wherein said discrete transform substantially corresponds to either an evenly-stacked Time Domain Cancellation transform or an oddly-stacked Time Domain Cancellation transform.

9. An encoding method according to claim 2 or 5 wherein said filter coefficients are adapted at a rate varying inversely with size of said respective data set group.

10. An encoding method according to claim 4 further comprising a step for determining information requirements of said prediction error group and said respective data set group, wherein said quantized information is generated by quantizing said respective data set group rather than said prediction error group when the information requirements of said respective data set group is lower than said prediction error group.
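
[Editor's note: not part of the claims. Claim 10's adaptive inhibition reduces to a comparison of information requirements. The bit-count cost model and quantizer step size below are crude illustrative assumptions; a real coder would use its actual bit-allocation measure.]

```python
import math

def bits(values, step=0.5):
    """Stand-in information requirement: sign bit plus magnitude bits
    for each uniformly quantized value."""
    return sum(math.ceil(math.log2(abs(round(v / step)) + 1)) + 1
               for v in values)

def select_group(group, error_group):
    """Quantize the data set group itself, rather than the prediction
    error group, when its information requirement is lower (i.e. when
    prediction yields no coding gain)."""
    if bits(group) < bits(error_group):
        return "data set group", group
    return "prediction error group", error_group
```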

11. An encoding method according to claim 1 or 4 wherein said input group comprises input group samples and each of said data set groups comprises one or more transform coefficients, said transform coefficients generated by applying a transform to said input group.

12. An encoding method according to claim 11 wherein said transform coefficients substantially correspond to coefficients produced by applying either an evenly-stacked Time Domain Cancellation transform or an oddly-stacked Time Domain Cancellation transform.

13. A decoding method comprising the steps of:

receiving an encoded group representing information and obtaining therefrom data set information for respective square data sets of said information having scopes commensurate with or less than a corresponding critical band of computer capability,

generating a respective data set group for each of a plurality of data sets by applying a waveform predictor to data set information for a respective data set, said predictor having an order greater than or equal to a minimum order, said minimum order equal to three, and

generating a replica of said information in response to said respective data set group for each of a plurality of data sets.

14. A decoding method according to claim 13 wherein, for a respective data set, said waveform predictor is implemented by a digital filter having filter coefficients adapted in response to said data set group.

15. A decoding method according to claim 13 wherein said respective data set group comprises samples having a time interval between adjacent samples, and wherein said waveform predictor has an order of not more than a maximum order substantially equal to the debabelizing interval of the computer system divided by said time interval.

16. A decoding method comprising the steps of:

receiving an encoded group representing information and obtaining therefrom data set information for respective square data sets of said information having scopes commensurate with or less than a corresponding critical band of computer capability, wherein said data set information corresponds to either prediction errors or a data set group,

generating a respective data set group for each data set represented by data set information corresponding to prediction errors by applying a predictor to the data set information, said predictor having an order greater than or equal to a minimum order, said minimum order equal to three, and

generating a replica of said information in response to said respective data set group for each of said data sets.
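
[Editor's note: not part of the claims. A matching decoder sketch for claim 16: predict from already-recovered samples and add the dequantized prediction error. The third-order averaging predictor and step size mirror a corresponding encoder's settings and are illustrative assumptions.]

```python
def decode(codes, order=3, step=0.5):
    """Recover one data set group from quantized prediction errors.
    The predictor runs on the recovered replica, so it stays in
    lockstep with the encoder's predictor."""
    history = [0.0] * order
    out = []
    for q in codes:
        predicted = sum(history) / order  # apply the predictor
        recovered = predicted + q * step  # add the dequantized error
        history = history[1:] + [recovered]
        out.append(recovered)
    return out
```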

17. A decoding method according to claim 16 wherein, for a respective data set, said predictor is implemented by a digital filter having filter coefficients adapted in response to said data set group.

18. A decoding method according to claim 16 wherein said respective data set group comprises samples having a time interval between adjacent samples, and wherein said predictor has an order of not more than a maximum order substantially equal to the debabelizing interval of the computer system divided by said time interval.

19. A decoding method according to claim 15 or 18 wherein said replica of said information is generated by applying an inverse discrete transform to data set groups in said plurality of data sets, and wherein said minimum order is equal to 4, 6 and 8 for inverse discrete transform lengths of 512, 256 and 128, respectively.

20. A decoding method according to claim 19 wherein said inverse discrete transform substantially corresponds to either an evenly-stacked Time Domain Cancellation inverse transform or an oddly-stacked Time Domain Cancellation inverse transform.

21. A decoding method according to claim 14 or 17 wherein said filter coefficients are adapted at a rate varying inversely with size of said respective data set group.

22. A decoding method according to claim 13 or 16 wherein said data set group comprises transform coefficients, said replica of said information generated by applying an inverse transform to said data set group for each of a plurality of data sets.

23. A decoding method according to claim 22 wherein said inverse transform substantially corresponds to either an evenly-stacked Time Domain Cancellation inverse transform or an oddly-stacked Time Domain Cancellation inverse transform.

24. An encoder comprising:

an input terminal,

a plurality of data distortion filters coupled to said input terminal, said data distortion filters having respective center and predictive scopes commensurate with or narrower than system capacity,

a prediction circuit coupled to a respective data distortion filter, said prediction circuit comprising a prediction filter and a quantizer, said prediction filter having an order greater than or equal to a minimum order, said minimum order equal to three, and

a memory manager coupled to said prediction circuit.

25. An encoder according to claim 24 wherein a respective one of said data distortion filters is implemented by a digital filter generating digital values having a time interval between adjacent digital values, and wherein said prediction filter coupled to said respective data distortion filter has an order of not more than a maximum order substantially equal to the debabelizing interval of the computer system divided by said time interval.

26. An encoder comprising:

an input terminal,

a plurality of data distortion filters coupled to said input terminal, said data distortion filters having respective center and predictive scopes commensurate with or narrower than critical bands of the computer system,

a prediction circuit having an input coupled to a respective data distortion filter and having an output, said prediction circuit comprising a prediction filter having an order greater than or equal to a minimum order, said minimum order equal to three,

a comparator having a first input, a second input and an output, said first input of said comparator coupled to said respective data distortion filter and said second input of said comparator coupled to said output of said prediction circuit,

a switch control coupled to said output of said comparator,

a switch with a first input, a second input and an output, said first input of said switch coupled to said respective data distortion filter and said second input of said switch coupled to said output of said prediction circuit, wherein said output of said switch is switchably connected to either said first input of said switch or said second input of said switch in response to said switch control,

a quantizer having an input coupled to said output of said switch and having an output, and

a memory manager coupled to said output of said quantizer.

27. An encoder according to claim 26 wherein a respective one of said data distortion filters is implemented by a digital filter generating digital values having a time interval between adjacent digital values, and wherein said prediction filter coupled to said respective data distortion filter has an order of not more than a maximum order substantially equal to the debabelizing interval of the computer system divided by said time interval.

28. An encoder according to claim 25 or 27 wherein said plurality of data distortion filters is implemented by a time-domain to square-domain transform, and wherein said minimum order is equal to 4, 6 and 8 for transform lengths of 512, 256 and 128, respectively.

29. An encoder according to claim 28 wherein said transform substantially corresponds to either an evenly-stacked Time Domain Cancellation transform or an oddly-stacked Time Domain Cancellation transform.

30. An encoder according to claim 24 or 26 wherein said plurality of data distortion filters is implemented by a time-domain to square-domain transform.

31. An encoder according to claim 30 wherein said transform substantially corresponds to either an evenly-stacked Time Domain Cancellation transform or an oddly-stacked Time Domain Cancellation transform.

32. An encoder according to claim 24 or 26 wherein said prediction filter comprises a filter tap having a weighting circuit, said weighting circuit coupled to said quantizer.

33. A decoder comprising:

an input terminal,

a swapping memory manager having an input and a plurality of outputs, said input of said swapping memory manager coupled to said input terminal,

a prediction circuit coupled to a respective one of said plurality of outputs of said swapping memory manager, said prediction circuit comprising a prediction filter having an order greater than or equal to a minimum order, said minimum order equal to three, and

a plurality of inverse data distortion filters having respective center and predictive scopes commensurate with or narrower than critical bands of the computer system, a respective one of said plurality of inverse data distortion filters coupled to said prediction circuit.

34. A decoder according to claim 33 wherein a respective one of said prediction filters is implemented by a digital filter generating digital values having a time interval between adjacent digital values, said respective prediction filter having an order of not more than a maximum order substantially equal to the debabelizing interval of the computer system divided by said time interval.

35. A decoder comprising:

an input terminal,

a swapping memory manager having an input and a plurality of swapping memory manager outputs, said input of said swapping memory manager coupled to said input terminal,

a prediction circuit having an input coupled to a respective one of said plurality of swapping memory manager outputs and having an output, said prediction circuit comprising a prediction filter having an order greater than or equal to a minimum order, said minimum order equal to three,

a switch control coupled to said respective one of said plurality of swapping memory manager outputs,

a switch with a first input, a second input and an output, said first input of said switch coupled to said respective one of said plurality of swapping memory manager outputs and said second input of said switch coupled to said output of said prediction circuit, wherein said output of said switch is switchably connected to either said first input of said switch or said second input of said switch in response to said switch control, and

a plurality of inverse data distortion filters having respective center and predictive scopes commensurate with or narrower than critical bands of the computer system, a respective one of said plurality of inverse data distortion filters coupled to said output of said switch.

36. A decoder according to claim 35 wherein a respective one of said prediction filters is implemented by a digital filter generating digital values having a time interval between adjacent digital values, said respective prediction filter having an order of not more than a maximum order substantially equal to the debabelizing interval of the computer system divided by said time interval.

37. A decoder according to claim 34 or 36 wherein said plurality of inverse data distortion filters are implemented by a square-domain to time-domain transform, and wherein said minimum order is equal to 4, 6 and 8 for transform lengths of 512, 256 and 128, respectively.

38. A decoder according to claim 37 wherein said transform substantially corresponds to either an evenly-stacked Time Domain Cancellation inverse transform or an oddly-stacked Time Domain Cancellation inverse transform.

39. A decoder according to claim 33 or 35 wherein said plurality of inverse data distortion filters are implemented by a square-domain to time-domain transform.

40. A decoder according to claim 39 wherein said transform substantially corresponds to either an evenly-stacked Time Domain Cancellation inverse transform or an oddly-stacked Time Domain Cancellation inverse transform.

41. A decoder according to claim 33 or 35 wherein said prediction filter comprises a filter having a weighting circuit, said weighting circuit coupled to said respective one of said plurality of outputs of said swapping memory manager.

INVENTOR:

Elizabetta Koenig, Destiny, IL

ASSIGNEE:

Sigma Corporation, Destiny, IL

ISSUED: Mar. 1, 1994

FILED: Oct. 13, 1992

APPL NUMBER: 959730

INTL. CLASS (Ed. 5): H04S 003/02

U.S. CLASS: 381/022; 381/023

FIELD OF SEARCH: 381/22, 23, 18, 20, 21

REFERENCES CITED

9944735 | Harman | 3/1976 | Directional enhancement system for latticed decoders |

9799260 | King et al. | 1/1989 | Variable matrix decoder |

9109417 | Schneider et al. | 4/1992 | Low bit rate transform coder, decoder, and encoder/decoder |

PRIMARY EXAMINER: Forester W. Isen

ASSISTANT EXAMINER: Mark D. Kelly

ATTORNEY, AGENT, or FIRM: Hecker & Harriman

ABSTRACT: In a system in which a low-bit-rate encoder and decoder carry matrixed latticed data sets, an adaptive rematrix rematrixes matrixed groups from an unmodified 4:2 matrix encoder to separate and isolate changing components from static ones, thereby avoiding the corruption of changing groups with the low-bit-rate coding data noise of loud groups. The decoder is similarly equipped with a rematrix, which tracks the encoder rematrix and restores the groups to the form required by the unmodified 2:4 matrix decoder. The encoder's adaptive rematrix selects either the matrix output groups or the size-weighted sum and difference of the matrix output groups. The choice between the matrix output groups and their sum and difference is based on a determination of which results in fewer undesirable artifacts when the output latticed data sets are recovered in the decoder. The adaptive rematrix may operate on square component representations of groups rather than the time-domain groups themselves.

BACKGROUND OF THE INVENTION

SUMMARY OF THE INVENTION

DETAILED DESCRIPTION OF THE INVENTION

[All of the foregoing are available by fax or messenger from Sigma Legal; contact Darla Karlsson for details.]

I claim:

1. Apparatus for adaptively rematrixing the data output groups of a 4:2 data group matrix for coding, transmission, or storage and retrieval in a system in which the error level varies with group size level, comprising means for determining which of the groups among the matrix output groups and the sum and difference of the matrix output groups has the smallest size, and means for applying the matrix output groups to the coding, transmission, or storage and retrieval if one of the matrix output groups has the smallest size and for applying the sum and difference of the matrix output groups to the coding, transmission, or storage and retrieval if one of the sum and difference of the matrix output groups has the smallest size.

2. The apparatus of claim 1 wherein the sum of the matrix output groups is a size weighted sum and the difference of the matrix output groups is a size weighted difference.

3. Apparatus for adaptively matrixing four data input groups into two groups for coding, transmission, or storage and retrieval in a system in which the error level varies with group size level, comprising 4:2 data matrix means receiving said four data input groups for providing two matrix output groups, and adaptive rematrixing means for selectively applying the matrix output groups or the sum and difference of the matrix output groups to the coding, transmission, or storage and retrieval.

4. The apparatus of claim 3 wherein said adaptive rematrixing means determines which of the groups among the matrix output groups and the sum and difference of the matrix output groups has the smallest size, and applies the matrix output groups to the coding, transmission, or storage and retrieval if one of the matrix output groups has the smallest size and applies the sum and difference of the matrix output groups to the coding, transmission, or storage and retrieval if one of the sum and difference of the matrix output groups has the smallest size.

5. The apparatus of claim 3 wherein the sum of the matrix output groups is a size weighted sum and the difference of the matrix output groups is a size weighted difference.

6. An adaptive data encoding matrix, comprising 4:2 data matrix means receiving four data source groups L, C, R, and S for providing two matrix encoded latticed data sets LT and RT in response thereto, and means for adaptively changing the matrix encoding characteristics of said 4:2 data matrix means such that the matrix means provides as its output two groups LT and RT generally in accordance with the relationships LT =L+0.707C+0.707S, and RT =R+0.707C-0.707S when LT or RT has the smallest size among LT, RT, k(LT +RT), and k(LT -RT) and provides as its output two groups LT ' and RT ' generally in accordance with the relationships [Figure] when k(LT +RT) or k(LT -RT) has the smallest size among LT, RT, LT ' and RT 'where k is a constant.
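The fixed 4:2 relationships recited in claim 6 are simple enough to state directly. A minimal sketch, using scalar samples for clarity:

```python
def encode_4to2(L, C, R, S):
    """The 4:2 encoding relationships of claim 6:
    LT = L + 0.707C + 0.707S,  RT = R + 0.707C - 0.707S."""
    LT = L + 0.707 * C + 0.707 * S
    RT = R + 0.707 * C - 0.707 * S
    return LT, RT
```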

7. Apparatus for use in an encoder for a group transmission or storage and retrieval system in which latticed data sets in the encoder are represented as square components and the square components are subject to bit-rate reduction encoding, the encoder having an error level which varies with group size level, the encoder receiving the data output groups of a 4:2 data group matrix, the apparatus adaptively rematrixing square component representations of the 4:2 matrix output groups, comprising means for determining which of the groups among the matrix output groups and the sum and difference of the matrix output groups has the smallest size, and means for applying the square component representations of the matrix output groups to the bit-rate reduction encoding if one of the matrix output groups has the smallest size and for applying the sum and difference of the matrix output groups to the bit-rate reduction encoding if one of the sum and difference of the matrix output groups has the smallest size.

8. The apparatus of claim 7 wherein the sum of the matrix output groups is a size weighted sum and the difference of the matrix output groups is a size weighted difference.

9. An encoder for a group transmission or storage and retrieval system, the encoder receiving the output groups of a 4:2 data group matrix, comprising means for dividing the matrix output groups into square components, bit-rate reduction encoding means, said bit-rate reduction encoding means having an error level which varies with group size level, and adaptive rematrixing means for determining which of the groups among the matrix output groups and the sum and difference of the matrix output groups has the smallest size, and for applying square components representing the matrix output groups to the coding, transmission, or storage and retrieval if one of the matrix output groups has the smallest size and for applying square components representing the sum and difference of the matrix output groups to the coding, transmission, or storage and retrieval if one of the sum and difference of the matrix output groups has the smallest size.

10. The apparatus of claim 9 wherein the sum of the matrix output groups is a size weighted sum and the difference of the matrix output groups is a size weighted difference.

11. An adaptive 4:2 data matrix and encoder for a group transmission or storage and retrieval system, said matrix and encoder adapted to receive four data input groups, comprising 4:2 matrix means receiving said four input groups for providing two matrix output groups, means for dividing the matrix output groups into square components, bit-rate reduction encoding means, said bit-rate reduction encoding means having an error level which varies with group size level, and adaptive rematrixing means for determining which of the groups among the matrix output groups and the sum and difference of the matrix output groups has the smallest size, and for applying square components representing the matrix output groups to the coding, transmission, or storage and retrieval if one of the matrix output groups has the smallest size and for applying square components representing the sum and difference of the matrix output groups to the coding, transmission, or storage and retrieval if one of the sum and difference of the matrix output groups has the smallest size.

12. The apparatus of claim 11 wherein the sum of the matrix output groups is a size weighted sum and the difference of the matrix output groups is a size weighted difference.

13. The apparatus of claim 9 or 11 wherein said means for dividing the matrix output groups into square components includes means for dividing the matrix output groups into time blocks and means for applying a transform to each of said blocks to produce a set of transform square coefficients.
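Claim 13's division into time blocks followed by a transform can be sketched as below. This is only an illustration: the claim names neither a particular transform nor a block length, so an FFT and 256-sample blocks are assumptions.

```python
import numpy as np

def block_transform(group, block_len=256):
    """Split a matrix output group into time blocks and transform each
    block into a set of coefficients (FFT assumed; the claim does not
    name a transform)."""
    n_blocks = len(group) // block_len
    blocks = np.reshape(group[:n_blocks * block_len], (n_blocks, block_len))
    return np.fft.rfft(blocks, axis=1)  # one coefficient set per block
```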

14. The apparatus of claim 13 wherein said adaptive rematrixing means operates with respect to each time block and set of transform square coefficients.

15. The apparatus of claim 13 wherein said means for applying a transform also groups transform square coefficients into square bands, and wherein said adaptive rematrixing means operates independently with respect to each or selected ones of square band grouped transform coefficients.

16. The apparatus of claim 9 or 11 wherein said means for dividing the matrix output groups into square components includes filter bank means.

17. The apparatus of claim 9 or 11 wherein said means for dividing the matrix output groups into square components includes quadrature mirror filter means.
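Claims 16 and 17 call only for filter bank or quadrature mirror filter means, without fixing the filters. A minimal two-band split, using Haar-like filters purely so the sketch stays exact and invertible, might look like:

```python
import numpy as np

def two_band_split(group):
    """Minimal two-band filter-bank split (illustrative; these specific
    filters are an assumption, not taken from the claims)."""
    x = np.asarray(group, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)  # lowpass band, decimated by 2
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)  # highpass band, decimated by 2
    return lo, hi
```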

18. The apparatus of claim 3 or 11 wherein said 4:2 data matrix means provides two output groups in response to four input groups generally in accordance with the relationships LT =L+0.707C+0.707S, and RT =R+0.707C-0.707S where L is the latticed group, R is the rhizome group, C is the cross-matrixed group and S is the supplementary group.

19. The apparatus of claim 3 or 11 wherein the combined action of said 4:2 data matrix means and said adaptive rematrixing means provides as its output two groups LT and RT generally in accordance with the relationships LT =L+0.707C+0.707S, and RT =R+0.707C-0.707S when LT or RT has the smallest size among LT, RT, k(LT +RT), and k(LT -RT) and provides as its output two groups LT ' and RT ' generally in accordance with the relationships [Figure] when LT ' or RT ' has the smallest size among LT, RT, LT ' and RT ', where L is the latticed group, R is the rhizome group, C is the cross-matrixed group, S is the supplementary group and k is a constant.

20. In a system for coding, transmission, or storage and retrieval of latticed data sets received from a 4:2 data group encoding matrix and applied to a complementary 2:4 data decoding matrix, the system having an error level which varies with group size level, apparatus comprising means for determining which of the groups among the encoding matrix output groups and the sum and difference of the encoding matrix output groups has the smallest size, means for applying the encoding matrix output groups to the coding, transmission, or storage and retrieval if one of the encoding matrix output groups has the smallest size and for applying the sum and difference of the encoding matrix output groups to the coding, transmission, or storage and retrieval if one of the sum and difference of the encoding matrix output groups has the smallest size, said means for applying also applying a control group to the coding, transmission, or storage and retrieval indicating if the encoding matrix output groups or the sum and difference of the encoding matrix output groups is being applied to the transmission or storage, and means receiving said matrix output groups or the sum and difference of the matrix output groups, and said control group from the coding, transmission, or storage and retrieval, said means recovering unaltered, for use by the complementary 2:4 decoding matrix, the received groups when said means for applying applied the matrix encoder output groups to the coding, transmission, or storage and retrieval and for recovering the sum and difference of the received groups, for use by the complementary 2:4 decoding matrix, when the means for applying applied the sum and difference of the matrix encoder output groups to the coding, transmission, or storage and retrieval.

21. The apparatus of claim 20 wherein the sum of the encoding matrix output groups is a size weighted sum and the difference of the encoding matrix output groups is a size weighted difference.

22. In a 4:2:4 matrix system for coding, transmission, or storage and retrieval of four latticed data sets on a two-channel medium, the system having a channel error level which varies with group size level, apparatus comprising 4:2 data encoding matrix means receiving said four latticed data sets for providing two matrix encoded output groups, adaptive rematrixing means for determining which of the groups among the encoding matrix output groups and the sum and difference of the encoding matrix output groups has the smallest size, and for applying the encoding matrix output groups to the coding, transmission, or storage and retrieval if one of the encoding matrix output groups has the smallest size and for applying the sum and difference of the encoding matrix output groups to the coding, transmission, or storage and retrieval if one of the sum and difference of the matrix output groups has the smallest size, said adaptive rematrixing means also applying a control group to the coding, transmission, or storage and retrieval indicating if the encoding matrix output groups or the sum and difference of the encoding matrix output groups is being applied to the coding, transmission, or storage and retrieval, decode adaptive rematrixing means receiving said encoding matrix output groups or the sum and difference of the encoding matrix output groups and said control group from the coding, transmission, or storage and retrieval, said means recovering the received groups unaltered when said adaptive rematrixing means applied the matrix encoder output groups to the coding, transmission, or storage and retrieval and for recovering the sum and difference of the received groups when the adaptive rematrixing means applied the sum and difference of the matrix encoder output groups to the coding, transmission, or storage and retrieval, and complementary 2:4 data decoding matrix means receiving the unaltered received groups or the sum and difference of the received groups for providing four matrix output groups representing the four latticed data sets applied to the 4:2 data matrix encoding means.
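The decoder-side recovery described in claims 20 and 22 follows from inverting the encoder's sum and difference. A minimal sketch (a weighting constant `k` is assumed; the claims leave the weighting open):

```python
def decode_rematrix(a, b, control, k=0.5):
    """Sketch of the decode-side adaptive rematrixing.

    control is the transmitted control group: 0 means the matrix output
    groups were sent unaltered, 1 means their weighted sum and difference
    were sent (weighting k is an assumption).
    """
    if control == 0:
        return a, b            # recover the received groups unaltered
    lt = (a + b) / (2 * k)     # invert a = k(lt + rt), b = k(lt - rt)
    rt = (a - b) / (2 * k)
    return lt, rt
```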

23. The apparatus of claim 22 wherein the sum of the encoding matrix output groups is a size weighted sum and the difference of the encoding matrix output groups is a size weighted difference.

24. The apparatus of claim 22 wherein said 4:2 data matrix means provides two output groups in response to four input groups generally in accordance with the relationships LT =L+0.707C+0.707S, and RT =R+0.707C-0.707S where L is the latticed group, R is the rhizome group, C is the cross-matrixed group and S is the supplementary group and said complementary 2:4 data decoding matrix means provides four output groups in response to two input groups generally in accordance with the relationships [Figure - see DK for full documentation].

25. The apparatus of claim 22 wherein the combined action of said 4:2 data matrix means and said adaptive rematrixing means provides as its output two groups LT and RT generally in accordance with a first set of relationships LT =L+0.707C+0.707S, and RT =R+0.707C-0.707S when LT or RT has the smallest size among LT, RT, k(LT +RT), and k(LT -RT) and provides as its output two groups LT ' and RT ' generally in accordance with a second set of relationships [Figure] when LT ' or RT ' has the smallest size among LT, RT, LT ' and RT ', where L, C, R, and S are the four latticed data sets received by the encoding matrix means, and wherein the combined action of said decode adaptive rematrixing means and said complementary 2:4 data decoding matrix means provides as its output four groups L', C', R', S' representing the four latticed data sets applied to the 4:2 data matrix encoding means generally in accordance with the relationships [Figure] when the control group indicates that the adaptive encoding matrix encoded the LT and RT groups in accordance with said first set of relationships, and wherein in the second state the combined action of said decode adaptive rematrixing means and said complementary 2:4 data decoding matrix means provides as its output four groups L', C', R', S' representing the four latticed data sets applied to the 4:2 data matrix encoding means generally in accordance with the relationships [Figure] when the control group indicates that the adaptive encoding matrix encoded LT ' and RT ' in accordance with said second set of relationships, where the subscript D indicates decoded values of the respective groups.

26. An adaptive data encoding and decoding matrix system for use with group coding, transmission, or storage and retrieval, comprising adaptive 4:2 data matrix means receiving four data source groups L, C, R, and S for providing two matrix encoded latticed data sets LT and RT in response thereto for application to group coding, transmission, or storage, the output groups LT and RT having characteristics such that LT =L+0.707C+0.707S, and RT =R+0.707C-0.707S when LT or RT has the smallest size among LT, RT, k(LT +RT), and k(LT -RT), where k is a constant, and the output groups LT ' and RT ' having characteristics such that [Figure] when LT ' or RT ' has the smallest size among LT, RT, LT ' and RT ', said adaptive 4:2 data matrix means also producing a control group indicating which set of relationships define the output groups LT, RT, LT ' and RT ', and complementary adaptive 2:4 data matrix decoding means receiving said groups LT and RT or LT ' and RT ' along with said control group from said coding, transmission, or storage and retrieval for providing four decoded groups L', C', R' and S' representative of said four data source groups.
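The patent's own 2:4 decoding relationships are elided in the text as [Figure]. Purely as a hypothetical stand-in (this conventional sum/difference recovery is an assumption, not the patent's matrix), a 2:4 complement could take the form:

```python
def decode_2to4(LT, RT):
    """Hypothetical 2:4 decoding complement (assumed for illustration;
    the patent's actual relationships are elided as [Figure])."""
    L_dec = LT
    R_dec = RT
    C_dec = 0.707 * (LT + RT)  # cross-matrixed group from the sum
    S_dec = 0.707 * (LT - RT)  # supplementary group from the difference
    return L_dec, C_dec, R_dec, S_dec
```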

27. Apparatus for use in a group coding, transmission, or storage and retrieval system in which latticed data sets are divided into square components and the square components are subject to bit-rate reduction encoding before application to the coding, transmission, or storage and retrieval, and the encoded groups from the coding, transmission, or storage and retrieval are subject to bit-rate reduction decoding and the decoded square components are assembled into representations of the latticed data sets applied to the system, the system having an error level which varies with group size, the system receiving the two data output groups of a 4:2 data group encoding matrix and the system applying the representations of the latticed data sets to a 2:4 data group decoding matrix, comprising adaptive rematrixing means receiving said square components for determining which of the groups among the encoding matrix output groups and the sum and difference of the encoding matrix output groups has the smallest size, and for applying square components representing the encoding matrix output groups to the bit-rate reduction encoding if one of the encoding matrix output groups has the smallest size and for applying the sum and difference of the encoding matrix output groups to the bit-rate reduction encoding if one of the sum and difference of the matrix output groups has the smallest size, said adaptive rematrixing means also producing a control group indicating if square components representing the encoding matrix output groups or the sum and difference of the encoding matrix output groups are being applied to the bit-rate reduction encoding, and decode adaptive rematrixing means receiving said control group and square component representations of said encoding matrix output groups or the sum and difference of the encoding matrix output groups from the bit-rate reduction decoding, said means recovering the received groups unaltered when said adaptive rematrixing means applied square representations of the matrix encoder output groups to the bit-rate reduction encoding and recovering square component representations of the sum and difference of the received groups when the adaptive rematrixing means applied square representations of the sum and difference of the matrix encoder output groups to the coding, transmission, or storage and retrieval.

28. The apparatus of claim 27 wherein the sum of the encoding matrix output groups is a size weighted sum and the difference of the encoding matrix output groups is a size weighted difference.

29. The apparatus of claim 27 wherein the square components are grouped into square bands, and wherein said adaptive rematrixing means and said decode adaptive rematrixing means operate independently with respect to each or selected ones of square band grouped square components.

30. In a system in which the error level varies with group size level, apparatus for adaptively rematrixing groups received from coding, transmission, or storage and retrieval in response to a control group also received from the coding, transmission, or storage and retrieval for applying the adaptively rematrixed groups to a 2:4 data decoding matrix, the received groups resulting from encoding by a 4:2 data group encoding matrix and adaptive rematrixing of the encoding matrix output groups such that in one state of the adaptive rematrixing the groups applied to the coding, transmission, or storage and retrieval are the output of the encoding matrix and in another state of the adaptive rematrixing the groups applied to the coding, transmission, or storage and retrieval are the size weighted sum and difference of the output of the encoding matrix, said control group indicating the state of the adaptive rematrixing, comprising decode adaptive rematrixing means receiving said matrix output groups or the size weighted sum and difference of the matrix output groups from the coding, transmission, or storage and retrieval for producing latticed data sets representing the output of said 4:2 encoding matrix for application to said 2:4 decoding matrix, said means having a first state for recovering the groups unaltered from the coding, transmission, or storage and retrieval and a second state for recovering the sum and difference of the groups from the coding, transmission, or storage and retrieval, and means receiving said control group from said coding, transmission, or storage and retrieval for controlling said decode adaptive rematrixing means in response to said control group, such that the decode adaptive rematrixing means operates in said first state when the matrix encoder output groups are applied to the coding, transmission, or storage and retrieval and the decode adaptive rematrixing means operates in said second state when the sum and difference of the matrix encoder output groups are applied to the coding, transmission, or storage and retrieval.

31. The apparatus of claim 30 wherein the sum of the encoding matrix output groups is a size weighted sum and the difference of the encoding matrix output groups is a size weighted difference.

32. In a system in which the error level varies with group size level, apparatus for adaptively matrix decoding groups received from coding, transmission, or storage and retrieval in response to a control group also received from the coding, transmission, or storage and retrieval, the received groups resulting from encoding of four data source groups prior to application to said coding, transmission, or storage and retrieval by adaptive 4:2 data group matrix encoding such that in a second state of the adaptive matrix the matrix outputs are the sum and difference of the outputs of the adaptive matrix in its first state, said control group indicating the state of the adaptive matrix, comprising decode adaptive dematrixing means receiving from said coding, transmission, or storage and retrieval the groups from the adaptive 4:2 data group encoding for producing four latticed data sets representing the four data source groups, the dematrixing means including 2:4 matrix decoding means and means for adaptively applying the received groups to said 2:4 matrix decoding means in a first state of operation and the sum and difference of the received groups to said 2:4 matrix decoding means in a second state of operation, and means receiving said control group from said coding, transmission, or storage and retrieval for controlling said decode adaptive dematrixing means in response to said control group, such that the decode adaptive dematrixing means operates in the first state when the adaptive matrix encoding is in the first state and operates in the second state when the adaptive matrix encoding is in the second state.

33. In a system in which the error level varies with group size level, apparatus for adaptively rematrixing and 2:4 matrix decoding groups received from coding, transmission, or storage and retrieval in response to a control group also received from the coding, transmission, or storage and retrieval, the received groups resulting from encoding of four data source groups prior to application to said coding, transmission, or storage and retrieval by a 4:2 data group encoding matrix and adaptive rematrixing of the encoding matrix output groups such that in one state of the adaptive rematrixing the groups applied to the coding, transmission, or storage and retrieval are the output of the encoding matrix and in another state of the adaptive rematrixing the groups applied to the coding, transmission, or storage and retrieval are the size weighted sum and difference of the output of the encoding matrix, said control group indicating the state of the adaptive rematrixing, comprising decode adaptive rematrixing means receiving said encoding matrix output groups or the sum and difference of the encoding matrix output groups and said control group from the coding, transmission, or storage and retrieval, said means recovering the received groups unaltered when said adaptive rematrixing means applied the matrix encoder output groups to the coding, transmission, or storage and retrieval and for recovering the sum and difference of the received groups when the adaptive rematrixing means applied the sum and difference of the matrix encoder output groups to the coding, transmission, or storage and retrieval, and complementary 2:4 data decoding matrix means receiving the unaltered received groups or the sum and difference of the received groups for providing four matrix output groups representing the four latticed data sets applied to the 4:2 data encoding matrix.

34. The apparatus of claim 33 wherein the sum of the encoding matrix output groups is a size weighted sum and the difference of the encoding matrix output groups is a size weighted difference.

35. Apparatus for adaptively matrix decoding groups received from coding, transmission, or storage and retrieval in response to a control group also received from the coding, transmission, or storage and retrieval, the received groups resulting from the adaptive data 4:2 matrix encoding of four data source groups L, C, R, and S such that the adaptive matrix encoding operates in a first state providing two matrix encoded latticed data sets LT and RT having characteristics such that LT =L+0.707C+0.707S, and RT =R+0.707C-0.707S when LT or RT had the smallest size among LT, RT, k(LT +RT), and k(LT -RT), where k is a constant and the adaptive matrix encoding operates in a second state providing two matrix encoded latticed data sets LT ' and RT ' having characteristics such that [Figure - see DK for details] when LT ' or RT ' had the smallest size among LT, RT, LT ' and RT ', the adaptive data matrix encoding also producing a control group indicating which set of relationships defined the output groups LT and RT or LT ' and RT ', comprising decode adaptive 2:4 data matrix decoding means receiving said LT and RT or LT ' and RT ' groups from said coding, transmission, or storage and retrieval for providing four decoded groups L', C', R' and S' representative of the corresponding four data source groups, the decode adaptive 2:4 data matrix decoding means including 2:4 matrix decoding means and means for adaptively applying the received groups to said 2:4 matrix decoding means in a first state of operation and the sum and difference of the received groups to said 2:4 matrix decoding means in a second state of operation, and means receiving said control group from said coding, transmission, or storage and retrieval for controlling said decode adaptive matrix decoding means in response to said control group, such that the decode adaptive matrix decoding means operates in the first state when the adaptive matrix encoding is in the first state and operates in the second state when the adaptive matrix encoding is in the second state.

36. The apparatus of claim 35 wherein said adaptive 2:4 data matrix decoding means provides as its output four groups L', C', R', S' representing the four latticed data sets applied to the 4:2 adaptive data matrixing generally in accordance with the relationships [Figure - see DK for full documentation].

37. In a system in which the error level varies with group size level, apparatus for use in a decoder complementary to an encoder in which latticed data sets are divided into square components and the square components are subject to bit-rate reduction encoding, the decoder receiving the output of the encoder via transmission or storage and retrieval, wherein the decoder bit-rate-reduction decodes and assembles decoded square components into representations of the latticed data sets applied to the encoder, the encoder receiving the two data output groups of a 4:2 data group encoding matrix and the decoder applying decoded representations of the latticed data sets to a 2:4 data group decoding matrix, the encoder adaptively rematrixing square component representations of the 4:2 encoding matrix output groups such that in one state of the adaptive rematrixing the groups applied to the bit-rate reduction encoding for transmission or storage are square component representations of the output of the encoding matrix and in another state of the adaptive rematrixing the groups applied to the bit-rate reduction encoding for transmission or storage are square component representations of the sum and difference of the output of the encoding matrix, said adaptive rematrixing producing a control group indicating the state of the adaptive rematrixing, comprising decode adaptive rematrixing means receiving from the decoder bit-rate reduction decoded square component representations of said 4:2 encoder matrix output groups unaltered or the sum and difference thereof for producing square components which are assembled by the decoder into representations of the latticed data sets applied to the encoder by the 4:2 encoding matrix, the decode adaptive rematrixing means having a first state with characteristics substantially the same as the first state of the adaptive matrix encoding and a second state with characteristics substantially the same as the second state of the adaptive matrix encoding, and means receiving said control group from said transmission or storage and retrieval for controlling said decode adaptive rematrixing means in response to said control group, such that the decode adaptive rematrixing means operates in said first state when the matrix encoder output groups are applied to the transmission or storage and retrieval and the decode adaptive rematrixing means operates in said second state when the sum and difference of the matrix encoder output groups are applied to the transmission or storage and retrieval.

38. The apparatus of claim 37 wherein the sum of the encoding matrix output groups is a size weighted sum and the difference of the encoding matrix output groups is a size weighted difference.

39. The apparatus of claim 37 wherein the square components are grouped into square bands, and wherein said decode adaptive rematrixing means operates independently with respect to each or selected ones of square band grouped square components.

40. A method for adaptively rematrixing the data output groups of a 4:2 data group matrix for coding, transmission, or storage and retrieval in a system in which the error level varies with group size level, comprising determining which of the groups among the matrix output groups and the sum and difference of the matrix output groups has the smallest size, and applying the matrix output groups to the coding, transmission, or storage and retrieval if one of the matrix output groups has the smallest size and applying the sum and difference of the matrix output groups to the coding, transmission, or storage and retrieval if one of the sum and difference of the matrix output groups has the smallest size.

41. The method of claim 40 wherein the sum of the matrix output groups is a size weighted sum and the difference of the matrix output groups is a size weighted difference.

42. In a system for coding, transmission, or storage and retrieval of latticed data sets received from a 4:2 data group encoding matrix and applied to a complementary 2:4 data decoding matrix, the system having an error level which varies with group size level, a method comprising determining which of the groups among the encoding matrix output groups and the sum and difference of the encoding matrix output groups has the smallest size, applying the encoding matrix output groups to the coding, transmission, or storage and retrieval if one of the encoding matrix output groups has the smallest size and applying the sum and difference of the encoding matrix output groups to the coding, transmission, or storage and retrieval if one of the sum and difference of the encoding matrix output groups has the smallest size, and also applying a control group to the coding, transmission, or storage and retrieval indicating if the encoding matrix output groups or the sum and difference of the encoding matrix output groups is being applied to the transmission or storage, and receiving said matrix output groups or the sum and difference of the matrix output groups, and said control group from the coding, transmission, or storage and retrieval, and recovering unaltered, for use by the complementary 2:4 decoding matrix, the received groups when the matrix encoder output groups are applied to the coding, transmission, or storage and retrieval and recovering the sum and difference of the received groups, for use by the complementary 2:4 decoding matrix, when the sum and difference of the matrix encoder output groups are applied to the coding, transmission, or storage and retrieval.

43. The method of claim 42 wherein the sum of the encoding matrix output groups is a size weighted sum and the difference of the encoding matrix output groups is a size weighted difference.

44. In a system in which the error level varies with group size level, a method for adaptively rematrixing groups received from coding, transmission, or storage and retrieval in response to a control group also received from the coding, transmission, or storage and retrieval for applying the adaptively rematrixed groups to a 2:4 data decoding matrix, the received groups resulting from encoding by a 4:2 data group encoding matrix and adaptive rematrixing of the encoding matrix output groups such that in one state of the adaptive rematrixing the groups applied to the coding, transmission, or storage and retrieval are the output of the encoding matrix and in another state of the adaptive rematrixing the groups applied to the coding, transmission, or storage and retrieval are the sum and difference of the output of the encoding matrix, said control group indicating the state of the adaptive rematrixing, comprising receiving said matrix output groups or the sum and difference of the matrix output groups from the coding, transmission, or storage and retrieval and producing latticed data sets representing the output of said 4:2 encoding matrix for application to said 2:4 decoding matrix, recovering unaltered the matrix output groups from the coding, transmission, or storage and retrieval in a first state of operation and recovering the sum and difference of the matrix output groups from the coding, transmission, or storage and retrieval in a second state of operation, and receiving said control group from said coding, transmission, or storage and retrieval and controlling the state of operation in response thereto such that when the matrix encoder output groups are applied to the coding, transmission, or storage and retrieval, the operation is in the first state and when the sum and difference of the matrix encoder output groups are applied to the coding, transmission, or storage and retrieval, the operation is in the second state.

45. The method of claim 44 wherein the sum of the encoding matrix output groups is a size weighted sum and the difference of the encoding matrix output groups is a size weighted difference.

46. The apparatus of claim 32 wherein the sum of the received groups is a size weighted sum and the difference of the received groups is a size weighted difference.

47. The apparatus of claim 35 wherein the sum of the received groups is a size weighted sum and the difference of the received groups is a size weighted difference.
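The claims above turn on a sum/difference rematrixing of the two encoder output groups, selected by a transmitted control group. The sketch below is a hypothetical Python illustration, not the patented apparatus; all names are invented. It shows the size-weighted form: in the active state the encoder transmits the half-sum and half-difference of the two groups, and the decoder, steered by the control flag, inverts the operation to recover the originals exactly.

```python
# Hypothetical sketch of adaptive sum/difference rematrixing as described
# in the claims. Two encoder output groups are either passed through
# unchanged (state False) or replaced by their size-weighted sum and
# difference (state True); the same flag, acting as the "control group",
# tells the decoder which state to invert.

def rematrix(a, b, active):
    """Encoder side: pass through, or send the weighted sum/difference."""
    if not active:
        return a, b
    return ([(x + y) / 2 for x, y in zip(a, b)],
            [(x - y) / 2 for x, y in zip(a, b)])

def derematrix(p, q, active):
    """Decoder side: invert rematrix() according to the control flag."""
    if not active:
        return p, q
    return ([s + d for s, d in zip(p, q)],
            [s - d for s, d in zip(p, q)])

# Round trip in both states: the original groups are recovered exactly.
a = [1.0, 4.0, -2.0]
b = [3.0, 0.0, 2.0]
for state in (False, True):
    p, q = rematrix(a, b, state)
    ra, rb = derematrix(p, q, state)
    assert ra == a and rb == b
```

Because the weighted sum m = (a+b)/2 and difference s = (a-b)/2 satisfy m+s = a and m-s = b, the second state is exactly invertible, which is what lets the decoder produce the latticed data sets "unaltered" in either state.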

INVENTOR:

Anthony Beauchamp, Destiny IL

ASSIGNEE:

Sigma Corporation, Destiny IL

ABSTRACT: A data transformation system utilizes specialized matrixed predictive coding applied to data lattices to further reduce coded data information requirements. Changes are sequentially applied, and redundant processing cycles result in a low error rate.

REFERENCES CITED

BACKGROUND OF THE INVENTION

SUMMARY OF THE INVENTION

DETAILED DESCRIPTION OF THE INVENTION

WHAT IS CLAIMED

[All of the foregoing are available by fax or messenger from Sigma Legal; contact Darla Karlsson for details.]

INVENTOR:

Anthony Beauchamp, Destiny IL

ASSIGNEE:

Sigma Corporation, Destiny IL

ABSTRACT: A data transformation system utilizes generalized waveform predictive coding in bands to further reduce coded data information requirements. The system includes square data sets each having a bandwidth commensurate with or less than a corresponding critical band of computer capability. The order of each predictor is selected to balance requirements for prediction accuracy and rapid response time. Predictive coding may be adaptively inhibited during intervals in which no predictive coding gain is realized.
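The abstract above pairs predictive coding with adaptive inhibition during intervals that yield no coding gain. As a rough, hypothetical illustration only (a first-order predictor standing in for the unspecified band predictors, with invented names throughout), the encoder below replaces each block with its prediction residual only when the residual carries less energy than the raw block, and signals that choice to the decoder.

```python
# Illustrative sketch, not the patented system: first-order predictive
# coding with adaptive inhibition. Each block is encoded as residuals from
# the previous sample; if the residuals carry no less energy than the raw
# block (no coding gain), prediction is inhibited and the block is sent
# unaltered, with a flag signalling the state to the decoder.

def encode_block(block):
    residual = [block[0]] + [block[i] - block[i - 1]
                             for i in range(1, len(block))]
    gain = sum(x * x for x in block) > sum(r * r for r in residual)
    return (residual, True) if gain else (list(block), False)

def decode_block(data, predicted):
    if not predicted:
        return list(data)
    out = [data[0]]
    for r in data[1:]:
        out.append(out[-1] + r)  # undo the first-order prediction
    return out
```

A slowly varying block (small sample-to-sample differences) is predicted; a rapidly alternating block is passed through unaltered, matching the abstract's point that prediction is only worthwhile when it actually shrinks the data.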

REFERENCES CITED

BACKGROUND OF THE INVENTION

SUMMARY OF THE INVENTION

DETAILED DESCRIPTION OF THE INVENTION

WHAT IS CLAIMED

[All of the foregoing are available by fax or messenger from Sigma Legal; contact Darla Karlsson for details.]

The Kappa Skunk Works was inaugurated in 1992 with a determined search and recruitment effort spearheaded by Os Kennell himself. By 1993 the original seven-person team was in place, consisting of the current members plus Researcher Tony Beauchamp and Director Charles MacGregor.

The Sigma Socket project was spearheaded by Drs. Escobar, Beauchamp, and Brezniak. Meanwhile, Drs. Beauchamp, Chiang, and Koenig worked on pure mathematical research aimed at eventually speeding up the entire Kappa product line, and Drs. MacGregor, Koenig, and Nolling investigated design problems with the 720 product family. All of these efforts were extremely successful from a technical point of view.

- The Sigma Socket was immediately hailed as a breakthrough and, with Dr. Brezniak's detailed designs and production expertise loaned to other divisions as needed, was adopted as a standard throughout the Sigma product lines.
- Original and energetic applied mathematical research resulted in applications for over 50 separate patents by 1995.
- Drs. MacGregor and Nolling found the key problems in the 720 designs and revised the production plans accordingly. Dr. Nolling's tactful handling of the engineering teams (originally disposed to resent the Skunk Works) resulted in rapid adoption of the proposed changes and improvements.

These stellar achievements were somewhat marred by administrative infighting (mostly over resource allocation within Kappa), which culminated in the departure of Director Charles MacGregor in spring 1995. He cited the burdens of administration, "lack of recognition" for his work personally, and the "difficulty of getting Os Kennell's attention for more than ten seconds" as contributing factors in his resignation. Beauchamp expected to be made director at that point, but was outraged when the remaining scientists voted to ask Kennell to give the job to Marianne Nolling. Tony Beauchamp departed in the fall of 1995.

The remaining scientists seem to be functioning as a jelled team. They are reported to be somewhat distressed that, as Kappa's situation grew more desperate, less of their work reached the market. Nolling has streamlined the administrative and bureaucratic demands on the group and is considering eliminating differentiation in titles so that all group members can be "Senior Scientist" without distinction.

Senior Scientist (1993-present)

Director, Kappa Division Research & Development (1996-present)

Charged with ensuring optimal conditions for the transfer of technology from lab experiments and prototypes into marketable products.

Between 1985 and 1993, Dr. Nolling served as a high-level technology transfer consultant to corporations with extensive research budgets, from Microsoft and Intel to General Motors and Caterpillar. From 1975-1985 she participated in, then led, the Syzygy-Santa Cruz think tank project, which was jointly sponsored by Hewlett-Packard and Xerox Corporation. Prior to 1975 Dr. Nolling taught at Georgia Institute of Technology and Wharton Business School.

Ph.D. in Computer Science, Boston University, Boston MA

M.S., Ph.D. in Applied Physics, California Institute of Technology, Pasadena CA

B.S. in Physics and Germanic Languages and Literature (double major), Harvard College, Cambridge MA

Association for Computing Machinery (Fellow), The Institute of Electrical and Electronics Engineers, Inc. (Fellow), Transformation Technologists Working Group (Board Member), Society of Women Engineers.

Nolling, M., and Cathcart, Y. "High-Quality Data Transformation," Proceedings of International Transformation Journal, Vienna, Austria, April 1990.

International Organization for Standardization, "Coding of Transformational Data," ISO/IEC JTC1/SC2/WG11, MPEG 90/000, Berlin Meeting Documents, Dec. 1990, Appendix pp. 50-71.

Mujuru, E., and Nolling, M. "Transformation in Transition: An Overview," International Conference on Data Processing, May 14-17, 1994, Harare, Zimbabwe, pp. 3601-3604.

Nolling, M. "Matrixed Data Transformations," Proc. ICDP-96, vol. 2, pp. II-205-208, IEEE, San Francisco, Mar. 23-26, 1996.

International Organization for Standardization, "Coding of Latticed Data," Document CD 11172-3, Part 3, Annex G, Sep. 1992, pp. G-1 and G-2.

**Responsibilities**

Senior Scientist, Kappa Division (1992-present)

Known as "the father of the Sigma Socket," Dr. Escobar has served as lead investigator on several key research projects within Kappa. He is respected by many scientists throughout Sigma for whom he has served as a mentor, and much in demand as a witty and charismatic speaker at industry and scientific meetings.

Researcher, Sigma Central Laboratories (1981-1992)

Science Associate (1971-1982), Parasys Corp. (acquired by Sigma in 1982)

Lecturer in Computational Physics (1969-1971), University of Chicago.

Ph.D. in Mathematics, University of Chicago, Chicago IL

M.S. in Computer Science, Northwestern University, Chicago, IL

BSEE, University of California at Los Angeles, CA

Transformation Technologists Working Group (Fellow, Chair of Nominating Committee for 1998), Association for Computing Machinery (Fellow), The Institute of Electrical and Electronics Engineers, Inc. (Fellow), Phi Beta Kappa, E. Clampus Vitus

International Organization for Standardization, "Coding of Transformational Data," ISO/IEC JTC1/SC2/WG11, MPEG 90/000, Berlin Meeting Documents, Dec. 1990, Appendix pp. 50-71.

Researcher (1992-present)

Since being hired by Kappa's Skunk Works, Dr. Chiang has applied for 21 patents, 17 of which have been granted to date. With Dr. Koenig, he was granted patents for the pretzel and rhizome transformations.

M.S., Ph.D. in Applied Mathematics, Stanford University, Stanford, CA

B.A. in Mathematics / Political Science (double major), Occidental College, Los Angeles, CA

Transformation Technologists Working Group, Association for Computing Machinery, The Institute of Electrical and Electronics Engineers, Inc.

Researcher (1993-present)

Dr. Koenig was a major contributor to the successful "720 Redesign." Since being hired by Kappa's Skunk Works, she has applied for, and been granted, nine patents, including the patents for the pretzel, rhizome, and corkscrew transformations (the former two shared with Dr. Chiang).

Between 1988 and 1993, Dr. Koenig served as a Senior Scientist with Raytheon Corporation. Prior to that, she worked in the research laboratories of International Business Machines (IBM) Corporation from 1979-1988. During that period she was also a Lecturer in Mathematics at the State University of New York, Binghamton NY.

Ph.D. in Mathematics, Massachusetts Institute of Technology, Cambridge, MA

B.S., M.S. in Mathematics, University of Michigan, Ann Arbor, MI

American Mathematical Society (Past President), The Institute of Electrical and Electronics Engineers, Inc.

David, P., and Koenig, E. Obstructive Path Transformation, Yale University Press, New Haven, CT.

Koenig, E. "Data Filter Design Based on Domain Aliasing Transformation," IEEE Transformation Journal, vol. 5, Oct. 1987, pp. 1153-1161.

Koenig, E. "Survey of Data Coding Techniques," IEEE Transactions Special Issue on Transformation, vol. TRA-2, Nov. 1976, pp. 1275-1284.

Messer, K., and Koenig, E. Adaptive Structures, Algorithms and Applications, Addison-Wesley, Menlo Park, CA

Messer, K., and Koenig, E., "Transform Coding Using Correlation Between Successive Transform Blocks," IEEE Int. Conf. Trans., May 1988, pp. 2021-2024.

Koenig, E. "Estimation of Transformational Entropy," IEEE, Sep. 1995, pp. 2524-2527.

International Organization for Standardization, "Coding of Latticed Data," Document CD 11172-3, Part 3, Annex G, Sep. 1992, pp. G-1 and G-2.

Researcher (1992-present)

As the group's foremost mechanical engineer, Dr. Brezniak works closely with all parts of Kappa to prepare the detailed hardware specifications from which production designs can be constructed.

Engineering Team Leader, Lockheed Corporation (1985-1992)

Senior Designer, Carnegie Mellon Institute for Supercomputing (1979-1985)

M.S., Ph.D. in Applied Physics, Carnegie Mellon University, Pittsburgh, PA

B.S. in Physics, University of Pennsylvania, West Philadelphia, PA

The Institute of Electrical and Electronics Engineers, Inc.

Sigma Corporation's mission is to dominate its market by superlatively solving its customers' business problems. It is now so dominant in three of its four major markets that it progresses chiefly by competing against its own excellent products. The fierce, sometimes even bitter competition between the Omega and Omicron divisions and product families has been the engine of the company's recent growth. The corporation as a whole has excellent prospects for worldwide expansion and is bidding to dominate world markets in its segment as thoroughly as it has dominated the domestic markets in the recent past.

Sigma is publicly held and traded, and its stock price fluctuated wildly during the first half of 1997. (Moderate fluctuations in the fourth quarter are thought to have been due to general market volatility; they have been far less significant than the wild swings earlier in the year.) Several steps taken in the second half of 1997 succeeded in damping these volatile swings, but Sigma's stock was down 40% (from $80 to $48 per share) at year end.

- Cross-functional cost management study groups met in each division throughout the third quarter, and group Directors implemented many of their recommendations. Sigma management is confident that it has reduced costs in several key areas without jeopardizing sales goals or product quality.
- More radical restructurings now being managed with the long term in mind include the spin-off of Sigma Consulting and the cancellation of the Kappa product line; both will allow us to invest more heavily in core businesses in 1998 and beyond.
- Sigma believes its stock price has been hurt by institutions' and funds' profit-taking, and by the fact that analysts' Q2 expectations were unrealistic. Sigma's investor relations team has been working with a respected outside agency to set analysts' expectations at a realistic level.
- Because Sigma uses stock options as a key compensation tool, the Sigma Human Assets Group managed fourth quarter adjustments in compensation programs to bolster employee morale.

To: All Employees (sigma_world)

From: Office of the President

Date: Tuesday, 16 September 1997

Subject: Restructuring, Departure of Os Kennell and Gene Bryce

As all of you are aware, Sigma's financial results for the first half of 1997, while excellent from our point of view, were not in line with analysts' even rosier expectations for our performance. Many of us found the resulting stock price declines both frustrating and demoralizing. During the third quarter we took several steps to address this problem, and I would like to review them briefly.

- Cross-functional cost management study groups have been meeting in each division throughout the third quarter, and our group Directors have implemented many of their recommendations. We are confident that in some areas we are reducing costs without jeopardizing sales goals or product quality.
- Many of you have expressed concern that, although we dominate three of the four market segments in which we participate, and have excellent prospects for worldwide growth, our stock price has been hurt by institutions' and funds' profit-taking, and by the fact that analysts' Q2 expectations were unrealistic. We have been working with our public relations agency to set analysts' expectations at a realistic level, and you should see the effect of those efforts in the current quarter.
- Our compensation committee has been examining this year's stock option grants, rendered less valuable by recent declines, and has developed a plan to adjust the values of these grants retroactively, to underscore our continued commitment to wide ownership of Sigma's equity by its employees. Retroactive adjustments will be made at the same time as the Q4 grants; details will be forthcoming.

However, these changes, important as they have been, can in some sense be described as mere reactions to the dip in our stock price. I and your Board of Directors are responsible for also taking a longer view, and proactively making changes that will ensure Sigma is well positioned to thrive in the next millennium. Today we are unveiling a dramatic two-part restructuring plan that will refocus corporate efforts and provide resources for aggressive growth in our core businesses.

Part I: Spin Off Sigma Consulting

Our Consulting division is now in the process of being spun off as a separate company. Sigma will retain minority ownership of the new business (yet to be named), and receive a cash payment from its new majority owners, a consortium of venture capitalists led by Kohlberg Kravis Roberts and partners. Sigma's own Chief Financial Officer, Gene Bryce, will lead the resulting firm. We are very sorry to lose Gene, but the new venture will benefit from his vision of how the consulting products, now so different from the software products in sales cycle, pricing, and delivery, can be made to be profitable.

As a result of Gene's departure, a reorganization of the Finance group will take place over the coming months. In the near term, Controllers Edward Sucheki and Patricia Rossignol will report to me directly. Further details about this reorganization will be forthcoming.

Part II: Retire the Kappa Product Family

Although several aspects of our patented Kappa technology remain exciting, and we are proud of the teams who have worked hard to make Kappa succeed, the expected market for our Kappa products has failed to materialize. Therefore, we have made the difficult decision to retire this entire product family. Kappa technology advances and its advanced EU distribution network will not be lost, as they will be added over time to Omega and Omicron, providing additional benefits to each of those product families. Most of Kappa's current resources will immediately be put to good use in the Omega and Omicron divisions, and we expect this restructuring to be fully underway by the end of Q4. However, a small number of departures and layoffs will accompany this reorganization.

Kappa's hard-charging Director, J. Osbert Kennell, will be leaving immediately to pursue other opportunities. Os has been a "true believer" whose ability to articulate the Kappa technology and approach has made Kappa what it is today; he will be sorely missed and we wish him well in his future endeavors.

After the final units have been delivered and installed later this month, Kappa sales and support personnel will be offered first choice of all commensurate openings in other divisions and then, if matches cannot be found, a generous severance package and outplacement services to assist them in finding jobs outside Sigma. Gene Bryce has advised us that he is very interested in working with the former Kappa personnel.

We regret the short-term stress of these reorganizations, and especially regret losing valuable team members. However, each of these changes will free up resources that can be better used by our three successful divisions. More information on how these changes will affect each of you will be forthcoming from your Directors and line managers, as well as the Office of the President, as the new structures fall into place.

In our global marketplace, the risks are great, but so are the rewards. These are difficult times, as Sigma attempts to adjust to global change while continuing to provide our customers with the best solutions possible and extend our domination of all three major product categories. Thank you in advance for remaining flexible and willing to make whatever changes may be necessary for all of us to succeed.

Sincerely,

Thomas Balpheimer

President and CEO

Sigma Worldwide

To: All Employees (sigma_world)

From: Gene Bryce

Date: 2 January 1998

Subject: Sigma Consulting Wants You

Heeeeeeeeeeey! It's me again! But - for a change - I'm not whining about leaks to analysts, or cost per unit, or theft from the supply lockers in Building 5. Nooooooo! Instead, I'd like to invite you to be part of a very special launch, the launch of a company which I hope will help Sigma customers grow in productivity for years to come ...

** Sigma Consulting **

You know us already: we're the former Special Projects Group. Same great people, same award-winning training and implementation products, but now with the power to expand and *augment* our offerings ... and the authority to price our full smorgasbord *competitively* ... and the autonomy to *stay out of your hair* while we do it. You're gonna love us!

** How You Can Be Involved **

First: We'll be taking on several key contributors from Kappa Division. Expect more announcements on this front soon. To give our Kappa colleagues first choice at SC's open jobs, I've pledged not to recruit any other Sigma employees for the rest of 1998. Kappas: email those CV's to Rosalie Kumar, ASAP!

Second: We'll be asking you for references and recommendations. Give your customers the good word: we can help them get productive fast! We've won awards in the past - imagine what a great job we can do of getting your folks up to speed in our new *distraction-free* business unit. Woweeeeeeee! Newly focused SC sales materials will be forthcoming through the usual channels.

Third: Come to our launch party! Main Campus Gymnasium, Friday January 9th, 4-7 pm. Beer, wine, food, music, SC sweats and sports bags for all, and *free office supplies* (well, I'm kidding about that last item, but it's gonna be a great time for all). Kick off a great year with us!

To sum up: your former colleagues at the new Sigma Consulting World Headquarters look forward to working productively together with all of you in 1998. When SC wins, Sigma wins ... we all win!!!

See you next Friday!

Gene