name: Prediction : public AlgorithmBase

synopsis:

```
g++ [flags ...] file ... -l /isip/tools/lib/$ISIP_BINARY/lib_algo.a

#include <Prediction.h>

Prediction(ALGORITHM algorithm = DEF_ALGORITHM, IMPLEMENTATION implementation = DEF_IMPLEMENTATION, long order = DEF_ORDER, float dyn_range = DEF_DYNAMIC_RANGE);
boolean eq(const Prediction& arg);
boolean setAlgorithm(ALGORITHM algorithm);
boolean setOrder(long order);
```
quick start:

```
Prediction lp;
VectorFloat input(L"1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0");
VectorFloat output;
lp.set(Prediction::AUTOCORRELATION, Prediction::DURBIN, 4, -40);
lp.compute(output, input);
```
description:

Linear prediction (LP) was once an integral part of most speech recognition systems. It rose to prominence in the 1970s, one of the most exciting periods in digital signal processing history. Since then, LP has largely been replaced by traditional Fourier transform techniques in most speech processing systems. The Prediction class is one of several classes that provide a complete inventory of linear prediction manipulations. The definitive textbook on this topic is:
J.D. Markel and A.H. Gray, Linear Prediction of Speech, Springer-Verlag, New York, New York, USA, 1980.
A good explanation of how these techniques relate to speech recognition can be found in:
J. Picone, "Signal Modeling Techniques in Speech Recognition," Proceedings of the IEEE, vol. 81, no. 9, pp. 1215-1247, September 1993.
The algorithm and implementation choices are shown below:

• AUTOCORRELATION: DURBIN, LEROUX_GUEGUEN
• COVARIANCE: CHOLESKY
• LATTICE: BURG
• REFLECTION: STEP_DOWN
• LOG_AREA_RATIO: KELLY_LOCHBAUM

Most of the standard algorithms above can be found in the references cited above. A good alternate reference for the AUTOCORRELATION, COVARIANCE, and LATTICE algorithms is:
L. Rabiner and R. Schafer, Digital Processing of Speech Signals, Prentice-Hall, Englewood Cliffs, New Jersey, USA, p. 396, 1978.
The log area ratio parameters are described in both textbooks. The lattice methods are provided mainly for historical reasons; they are no longer widely used in speech processing today.

Aside from allowing users to set the analysis order, a noise-weighting option is also provided. This is described in the IEEE Proceedings paper cited above, and is used to prevent the LP analysis from modeling low-energy areas of the spectrum.

The covariance matrix supplied to this method is in the format output by the Covariance class. This matrix contains all covariance values necessary to perform the LP computations, and includes lags from [0,0] to [p,p]. It can be considered an augmented matrix when compared to the matrices used in the textbooks cited above.

dependencies:

public constants:

• define the class name:
`static const String CLASS_NAME = L"Prediction";`
• define algorithm types:
`enum ALGORITHM { AUTOCORRELATION = 0, COVARIANCE, LATTICE, REFLECTION, LOG_AREA_RATIO, DEF_ALGORITHM = AUTOCORRELATION };`
• define implementation types:
`enum IMPLEMENTATION { DURBIN = 0, LEROUX_GUEGUEN, CHOLESKY, BURG, STEP_DOWN, KELLY_LOCHBAUM, DEF_IMPLEMENTATION = DURBIN };`
• define static NameMap objects:
`static const NameMap ALGO_MAP = L"AUTOCORRELATION, COVARIANCE, LATTICE, REFLECTION, LOG_AREA_RATIO";`
`static const NameMap IMPL_MAP = L"DURBIN, LEROUX_GUEGUEN, CHOLESKY, BURG, STEP_DOWN, KELLY_LOCHBAUM";`
• i/o related constants:
`static const String DEF_PARAM = L"";`
`static const String PARAM_ALGORITHM = L"algorithm";`
`static const String PARAM_IMPLEMENTATION = L"implementation";`
`static const String PARAM_ORDER = L"order";`
`static const String PARAM_DYN_RANGE = L"dynamic_range";`
• define default value(s) of the class data:
`static const long DEF_ORDER = -1;`
`static const float DEF_DYNAMIC_RANGE = (float)-100.0;`
• define default argument(s):
`static const AlgorithmData::COEF_TYPE DEF_COEF_TYPE = AlgorithmData::CORRELATION;`
error codes:

• error code indicating Prediction class general error:
`static const long ERR = (long)71100;`
`static const long ERR_DYNRANGE = (long)71101;`
`static const long ERR_BETA = (long)71102;`
`static const long ERR_ENERGY = (long)71103;`
`static const long ERR_PREDERR = (long)71104;`
protected data:

required public methods:

• static methods:
`static const String& name();`
`static boolean diagnose(Integral::DEBUG debug_level);`
• debug methods: setDebug method is inherited from base class
`boolean debug(const unichar* message) const;`
• destructor/constructor(s):
`~Prediction();`
`Prediction(ALGORITHM algorithm = DEF_ALGORITHM, IMPLEMENTATION implementation = DEF_IMPLEMENTATION, long order = DEF_ORDER, float dyn_range = DEF_DYNAMIC_RANGE);`
`Prediction(const Prediction& arg);`
• assign methods:
`boolean assign(const Prediction& arg);`
• operator= methods:
`Prediction& operator= (const Prediction& arg);`
• i/o methods:
`long sofSize() const;`
`boolean read(Sof& sof, long tag, const String& name = CLASS_NAME);`
`boolean write(Sof& sof, long tag, const String& name = CLASS_NAME) const;`
`boolean readData(Sof& sof, const String& pname = DEF_PARAM, long size = SofParser::FULL_OBJECT, boolean param = true, boolean nested = false);`
`boolean writeData(Sof& sof, const String& name = DEF_PARAM) const;`
• equality methods:
`boolean eq(const Prediction& arg) const;`
• memory management methods:
`static void* operator new(size_t size);`
`static void* operator new[](size_t size);`
`static void operator delete(void* ptr);`
`static void operator delete[](void* ptr);`
`static boolean setGrowSize(long grow_size);`
`boolean clear(Integral::CMODE ctype = Integral::DEF_CMODE);`
class-specific public methods:

• set methods:
`boolean setAlgorithm(ALGORITHM algorithm);`
`boolean setImplementation(IMPLEMENTATION implementation);`
`boolean setOrder(long order);`
`boolean setDynRange(float dyn_range);`
`boolean set(ALGORITHM algorithm = DEF_ALGORITHM, IMPLEMENTATION implementation = DEF_IMPLEMENTATION, long order = DEF_ORDER, float dyn_range = DEF_DYNAMIC_RANGE);`
• get methods:
`ALGORITHM getAlgorithm() const;`
`IMPLEMENTATION getImplementation() const;`
`long getOrder() const;`
`float getDynRange() const;`
`boolean get(ALGORITHM& algorithm, IMPLEMENTATION& implementation, long& order, float& dyn_range) const;`
• computational methods:
`boolean compute(VectorFloat& output, const VectorFloat& input, AlgorithmData::COEF_TYPE input_coef_type = DEF_COEF_TYPE, long index = DEF_CHANNEL_INDEX);`
`boolean compute(VectorFloat& output, const MatrixFloat& input, AlgorithmData::COEF_TYPE input_coef_type = DEF_COEF_TYPE, long index = DEF_CHANNEL_INDEX);`
`boolean compute(VectorFloat& output, float& err_energy, const VectorFloat& input, AlgorithmData::COEF_TYPE input_coef_type = DEF_COEF_TYPE, long index = DEF_CHANNEL_INDEX);`
`boolean compute(VectorFloat& output, float& err_energy, const MatrixFloat& input, AlgorithmData::COEF_TYPE input_coef_type = DEF_COEF_TYPE, long index = DEF_CHANNEL_INDEX);`
• AlgorithmBase interface contract methods:
`boolean assign(const AlgorithmBase& arg);`
`boolean eq(const AlgorithmBase& arg) const;`
`const String& className() const;`
`boolean init();`
`boolean apply(Vector<AlgorithmData>& output, const Vector< CircularBuffer<AlgorithmData> >& input);`
`boolean setParser(SofParser* parser);`
private methods:

• algorithm-specific i/o methods:
`boolean readDataCommon(Sof& sof, const String& pname, long size = SofParser::FULL_OBJECT, boolean param = true, boolean nested = false);`
`boolean writeDataCommon(Sof& sof, const String& pname);`
• algorithm and implementation specific computational methods:
`boolean computeAutoDurbin(VectorFloat& output, float& err_energy, const VectorFloat& input);`
`boolean computeCovarCholesky(VectorFloat& output, float& err_energy, const MatrixFloat& input);`
`boolean computeLatticeBurg(VectorFloat& output, float& err_energy, const VectorFloat& input);`
`boolean computeReflectionStepDown(VectorFloat& output, const VectorFloat& input);`
`boolean computeLogAreaKellyLochbaum(VectorFloat& output, const VectorFloat& input);`
examples:

• This example shows how to compute the prediction coefficients using the Burg analysis:
```
Prediction lp;
VectorFloat input;
VectorFloat pred_coef;

// use the following data as input:
//
//  x(n) = 0 when n = 0, 1, 2, 3;
//  x(n) = 2*pow(0.99, n-4) - pow(0.99, 2*(n-4)), when 4 <= n < 20;
//  x(n) = 0 when n = 20, 21, 22, 23
//
input.setLength(24);

double z = 1;
for (long i = 4; i < 20; i++) {
input(i) = 2 * z - z * z;
z = 0.99 * z;
}

// set the order, algorithm and dynamic threshold
//
long order = 4;
float dyn_range = -60;

lp.set(Prediction::LATTICE, Prediction::BURG, order, dyn_range);

// compute the prediction coefficients
//
lp.compute(pred_coef, input, AlgorithmData::SIGNAL, (long)0);
```
notes:

• none.