Data Compression: Coding and Representation

Abstract- This paper presents a brief summary of data compression. It covers various data compression techniques, such as Huffman coding, the static defined-word scheme, and adaptive Huffman coding, and discusses the future scope of data compression techniques.



Data compression is often referred to as coding, where coding is a very general term encompassing any special representation of data which satisfies a given need. Information theory is defined to be the study of efficient coding and its consequences, in the form of speed of transmission and probability of error [Ingels 1971]. Data compression may be viewed as a branch of information theory in which the primary objective is to minimize the amount of data to be transmitted. The purpose of this paper is to present and analyze a variety of data compression algorithms.

A simple characterization of data compression is that it involves transforming a string of characters in some representation (such as ASCII) into a new string (of bits, for example) which contains the same information but whose length is as small as possible. Data compression has important application in the areas of data transmission and data storage. Many data processing applications require storage of large volumes of data, and the number of such applications is constantly increasing as the use of computers extends to new disciplines. At the same time, the proliferation of computer communication networks is resulting in massive transfer of data over communication links. Compressing data to be stored or transmitted reduces storage and/or communication costs. When the amount of data to be transmitted is reduced, the effect is that of increasing the capacity of the communication channel. Similarly, compressing a file to half of its original size is equivalent to doubling the capacity of the storage medium. It may then become feasible to store the data at a higher, thus faster, level of the storage hierarchy and reduce the load on the input/output channels of the computer system.


A.) Definitions

A code is a mapping of source messages (words from the source alphabet alpha) into codewords (words of the code alphabet beta). The source messages are the basic units into which the string to be represented is partitioned. These basic units may be single symbols from the source alphabet, or they may be strings of symbols. For the string EXAMPLE, alpha = { a, b, c, d, e, f, g, space }. For purposes of explanation, beta will be taken to be { 0, 1 }. Codes can be categorized as block-block, block-variable, variable-block or variable-variable, where block-block indicates that the source messages and codewords are of fixed length, and variable-variable codes map variable-length source messages into variable-length codewords. A block-block code for EXAMPLE is shown in Figure 1.1 and a variable-variable code is given in Figure 1.2. If the string EXAMPLE were coded using the Figure 1.1 code, the length of the coded message would be 120; using Figure 1.2 the length would be 30.

source message  codeword      source message  codeword

a       000                   aa        0
b       001                   bbb       1
c       010                   cccc      10
d       011                   ddddd     11
e       100                   eeeeee    100
f       101                   fffffff   101
g       110                   gggggggg  110
space   111                   space     111

Figure 1.1: A block-block code (left); Figure 1.2: A variable-variable code (right)

The oldest and most widely used codes, ASCII and EBCDIC, are examples of block-block codes, mapping an alphabet of 64 (or 256) single characters onto 6-bit (or 8-bit) codewords. These are not discussed, as they do not provide compression. The codes featured in this survey are of the block-variable, variable-variable, and variable-block types.

CLASSIFICATION OF METHODS

In addition to the categorization of data compression schemes with respect to message and codeword lengths, these methods are classified as either static or dynamic. A static method is one in which the mapping from the set of messages to the set of codewords is fixed before transmission begins, so that a given message is represented by the same codeword every time it appears in the message ensemble. The classic static defined-word scheme is Huffman coding.

HUFFMAN CODING

Huffman codes are optimal prefix codes generated from a set of probabilities by a particular algorithm, the Huffman coding algorithm. David Huffman developed the algorithm as a student in a class on information theory at MIT in 1950. The algorithm is now probably the most prevalently used component of compression algorithms, used as the back end of GZIP, JPEG and many other utilities.

The Huffman algorithm is very simple and is most easily described in terms of how it generates the prefix-code tree.

Start with a forest of trees, one for each message. Each tree contains a single vertex whose weight is the probability of the corresponding message.

Repeat until only a single tree remains:

Select the two trees with the lowest weight roots (say w1 and w2).

Combine them into a single tree by adding a new root with weight w1 + w2, and making the two trees its children. It does not matter which is the left or right child, but our convention will be to put the lower weight root on the left.

For a code of size n this algorithm will require n - 1 steps, since every complete binary tree with n leaves has n - 1 internal nodes, and each step creates one internal node. If we use a priority queue with O(log n) time insertions and find-mins (e.g., a heap), the algorithm will run in O(n log n) time.
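The forest-merging procedure above can be sketched with a heap-based priority queue. This is an illustrative implementation, not code from the original paper; the symbol names and probabilities below are example values chosen so that codeword lengths come out as whole powers of two.

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a Huffman code for {symbol: probability} by repeatedly
    merging the two lowest-weight trees; the heap gives the
    O(n log n) bound mentioned in the text."""
    tick = count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(tick), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)   # lowest weight root
        w2, _, right = heapq.heappop(heap)  # second lowest
        # New root of weight w1 + w2; prefix 0 for the left subtree,
        # 1 for the right subtree.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, next(tick), merged))
    return heap[0][2]

# Dyadic example distribution: codeword lengths match -lg p exactly.
probs = {"a": 1/2, "b": 1/4, "c": 1/8, "d": 1/16, "e": 1/32, "f": 1/32}
code = huffman_code(probs)
avg = sum(p * len(code[s]) for s, p in probs.items())
```

For this distribution the resulting codeword lengths are 1, 2, 3, 4, 5, 5 and the average length equals the entropy, 1.9375 bits per symbol.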

The Huffman algorithm generates an optimal prefix code.

Proof: The proof will be by induction on the number of messages in the code. In particular, we will show that if the Huffman algorithm generates an optimal prefix code for all probability distributions of n messages, then it generates an optimal prefix code for all distributions of n + 1 messages. The base case is trivial since the prefix code for 1 message is unique (i.e., the null message) and therefore optimal.

We first argue that for any set of messages S there is an optimal code for which the two minimum probability messages are siblings (have the same parent in their prefix tree). We know that the two minimum probability messages are on the lowest level of the tree (any complete binary tree has at least two leaves on its lowest level). Also, we can swap any leaves on the lowest level without affecting the average length of the code, since all these codewords have the same length. We can therefore simply swap the two lowest probability messages so they are siblings.


Now for the induction we consider a set of message probabilities S of size n + 1 and the corresponding tree T built by the Huffman algorithm. Call the two lowest probability nodes in the tree x and y, which must be siblings in T because of the design of the algorithm. Consider the tree T' obtained by replacing x and y with their parent, call it z, with probability p(z) = p(x) + p(y) (this is effectively what the Huffman algorithm does). Let's say the depth of z is d, then

la(T) = la(T') - d(p(x) + p(y)) + (d + 1)(p(x) + p(y))    (4)

la(T) = la(T') + p(x) + p(y)    (5)

To see that T is optimal, note that there is an optimal tree in which x and y are siblings, and that wherever we place these siblings they are going to add a constant p(x) + p(y) to the average length of any prefix tree on S with the pair x and y replaced with their parent. By the induction hypothesis la(T') is minimized, since T' is of size n and built by the Huffman algorithm, and therefore la(T) is minimized and T is optimal. Since Huffman coding is optimal we know that for any probability distribution S and associated Huffman code C

H(S) ≤ la(C) ≤ H(S) + 1


Even though Huffman codes are optimal relative to other prefix codes, prefix codes can be quite inefficient relative to the entropy. In particular, H(S) could be much less than 1, and then the extra 1 bit in H(S) + 1 could be very significant. One way to reduce the per-message overhead is to group messages. This is particularly easy if a sequence of messages are all from the same probability distribution. Consider a distribution of six possible messages. We could generate probabilities for all 36 pairs by multiplying the probabilities of each message (there will be at most 21 unique probabilities). A Huffman code can now be generated for this new probability distribution and used to code two messages at a time. Note that this technique is not taking advantage of conditional probabilities since it directly multiplies the probabilities. In general, by grouping k messages the overhead of Huffman coding can be reduced from 1 bit per message to 1/k bits per message. The problem with this technique is that in practice messages are often not from the same distribution, and merging messages from different distributions can be expensive because of all the possible probability combinations that might have to be generated.
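The pairing idea can be sketched numerically. The six message probabilities below are invented for illustration; multiplying the per-message probabilities gives the distribution over all 36 ordered pairs, with at most 21 distinct probability values since p(i)p(j) = p(j)p(i).

```python
# Hypothetical six-message distribution (not from the paper).
p = [0.03, 0.07, 0.11, 0.19, 0.27, 0.33]

# Probabilities of all 36 ordered pairs, assuming independent messages.
pairs = [pi * pj for pi in p for pj in p]

distinct = len(set(pairs))  # at most 6 squares + C(6,2) products = 21
total = sum(pairs)          # still a valid distribution: sums to 1
```

A Huffman code built over these 36 pair probabilities then codes two messages at a time, halving the worst-case per-message overhead.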


The Huffman coding algorithm has some flexibility when two equal frequencies are found. The choice made in such situations will change the final code, including possibly the code length of each message. Since all Huffman codes are optimal, however, it cannot change the average length.

For example, consider the following message probabilities and codes.

symbol  probability  code 1  code 2
(the table rows for symbols a through e did not survive)

Both codes produce an average of 2.2 bits per symbol, even though the lengths are quite different in the two codes. For some applications it can be helpful to reduce the variance in the code length. The variance is defined as

VAR = SUM { p(i) (l(i) - la)^2 }

With lower variance it can be easier to maintain a constant character transmission rate, or reduce the size of buffers. In the above example, code 1 clearly has a much higher variance than code 2. It turns out that a simple modification to the Huffman algorithm can be used to generate a code that has minimum variance.
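Since the original table did not survive, the sketch below uses an assumed distribution and two assumed sets of codeword lengths (both satisfying the Kraft inequality and chosen so that each code averages 2.2 bits) to show how the average and the variance are computed.

```python
# Assumed values for illustration only; the paper's actual table was lost.
p    = [0.2, 0.4, 0.2, 0.1, 0.1]  # probabilities of symbols a..e
len1 = [2, 1, 3, 4, 4]            # codeword lengths under code 1
len2 = [2, 2, 2, 3, 3]            # codeword lengths under code 2

def average(p, lens):
    """Average codeword length la = SUM p(i) l(i)."""
    return sum(pi * li for pi, li in zip(p, lens))

def variance(p, lens):
    """Variance of code length: SUM p(i) (l(i) - la)^2."""
    la = average(p, lens)
    return sum(pi * (li - la) ** 2 for pi, li in zip(p, lens))

avg1, avg2 = average(p, len1), average(p, len2)
var1, var2 = variance(p, len1), variance(p, len2)
```

Both assumed codes average 2.2 bits per symbol, but the first has a much larger variance (1.36 versus 0.16), mirroring the comparison in the text.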

Figure 2: Binary tree for Huffman code 2

In particular, when choosing the two nodes to merge and there is a choice based on weight, always pick the node that was created earliest in the algorithm. Leaf nodes are assumed to be created before all internal nodes. In the example above, after d and e are joined, the pair will have the same probability as c and a (.2), but it was created afterwards, so we join c and a. Similarly we select b instead of ac to join with de since it was created before. This will give code 2 above, and the corresponding Huffman tree in Figure 2.


In order to discuss the relative merits of data compression techniques, a framework for comparison must be established. There are two dimensions along which each of the schemes discussed here may be measured: algorithm complexity and amount of compression. When data compression is used in a data transmission application, the goal is speed. Speed of transmission depends upon the number of bits sent, the time required for the encoder to generate the coded message, and the time required for the decoder to recover the original ensemble. In a data storage application, although the degree of compression is the primary concern, it is nevertheless necessary that the algorithm be efficient in order for the scheme to be practical. For a static scheme, there are three algorithms to analyze: the map construction algorithm, the encoding algorithm, and the decoding algorithm. For a dynamic scheme, there are only two algorithms: the encoding algorithm, and the decoding algorithm.

Several common measures of compression have been suggested: redundancy [Shannon and Weaver 1949], average message length [Huffman 1952], and compression ratio [Rubin 1976; Ruth and Kreutzer 1972]. These measures are defined below. Related to each of these measures are assumptions about the characteristics of the source. It is generally assumed in information theory that all statistical parameters of a message source are known with perfect accuracy [Gilbert 1971]. The most common model is that of a discrete memoryless source: a source whose output is a sequence of letters (or messages), each letter being a selection from some fixed alphabet a. Without loss of generality, the code alphabet is assumed to be {0,1} throughout this paper. The modifications necessary for larger code alphabets are straightforward.

When data is compressed, the goal is to reduce redundancy, leaving only the informational content. The measure of information of a source message x (in bits) is -lg p(x) [lg denotes the base 2 logarithm]. Since the length of a codeword for message a(i) must be sufficient to carry the information content of a(i), entropy imposes a lower bound on the number of bits required for the coded message. The total number of bits must be at least as large as the product of H and the length of the source ensemble. Since the value of H is generally not an integer, variable length codewords must be used if the lower bound is to be achieved.
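A quick numeric sketch of these definitions (the distribution is an example, not from the text): the information content of a message is -lg p(x), H is the probability-weighted sum of those values, and H times the ensemble length lower-bounds the coded size.

```python
import math

# Example dyadic distribution (illustrative).
probs = {"a": 1/2, "b": 1/4, "c": 1/8, "d": 1/16, "e": 1/32, "f": 1/32}

# Information content of each message: -lg p(x) bits.
info = {s: -math.log2(p) for s, p in probs.items()}

# Entropy H: expected information per message.
H = sum(p * info[s] for s, p in probs.items())

# Entropy lower bound on the coded size of a 1000-message ensemble.
lower_bound_bits = H * 1000
```

Because this distribution is dyadic, each -lg p(x) is an integer, so a prefix code can meet the bound exactly; in general H is not an integer and variable-length codewords are required to approach it.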


The classic defined-word scheme was developed over 30 years ago in Huffman's well-known paper on minimum-redundancy coding [Huffman 1952]. Huffman's algorithm provided the first solution to the problem of constructing minimum-redundancy codes. Many people believe that Huffman coding cannot be improved upon, that is, that it is guaranteed to achieve the best possible compression ratio. This is only true, however, under the constraints that each source message is mapped to a unique codeword and that the compressed text is the concatenation of the codewords for the source messages. An earlier algorithm, due independently to Shannon and Fano [Shannon and Weaver 1949; Fano 1949], is not guaranteed to provide optimal codes, but approaches optimal behavior as the number of messages approaches infinity. The Huffman algorithm is also of importance because it has provided a foundation upon which other data compression techniques have built and a benchmark to which they may be compared. We classify the codes generated by the Huffman and Shannon-Fano algorithms as variable-variable and note that they include block-variable codes as a special case, depending upon how the source messages are defined.

SHANNON-FANO CODING

The Shannon-Fano technique has as an advantage its simplicity. The code is constructed as follows: the source messages a(i) and their probabilities p(a(i)) are listed in order of nonincreasing probability. This list is then divided in such a way as to form two groups of as nearly equal total probabilities as possible. Each message in the first group receives 0 as the first digit of its codeword; the messages in the second half have codewords beginning with 1. Each of these groups is then divided according to the same criterion, and additional code digits are appended. The process is continued until each subset contains only one message. Clearly the Shannon-Fano algorithm yields a minimal prefix code.

a  1/2   0
b  1/4   10
c  1/8   110
d  1/16  1110
e  1/32  11110
f  1/32  11111
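The splitting procedure can be sketched recursively. This is an illustrative implementation, not the authors' code; for the dyadic probabilities listed above the splits are always exact, so it reproduces the listed codewords.

```python
def shannon_fano(items):
    """items: list of (symbol, probability) sorted by nonincreasing
    probability. Split at the point that makes the two groups' totals
    as nearly equal as possible, prepend 0/1, and recurse."""
    if len(items) == 1:
        return {items[0][0]: ""}
    total = sum(p for _, p in items)
    acc, best_i, best_diff = 0.0, 1, float("inf")
    for i in range(1, len(items)):
        acc += items[i - 1][1]
        diff = abs(2 * acc - total)  # imbalance if we split before item i
        if diff < best_diff:
            best_diff, best_i = diff, i
    code = {s: "0" + c for s, c in shannon_fano(items[:best_i]).items()}
    code.update({s: "1" + c for s, c in shannon_fano(items[best_i:]).items()})
    return code

table = [("a", 1/2), ("b", 1/4), ("c", 1/8),
         ("d", 1/16), ("e", 1/32), ("f", 1/32)]
code = shannon_fano(table)
```

For non-dyadic distributions the greedy split is not always optimal, which is why Shannon-Fano codes are only guaranteed to approach optimality.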

ARITHMETIC CODING

Arithmetic coding is a technique for coding that allows the information from the messages in a message sequence to be combined to share the same bits. The technique allows the total number of bits sent to asymptotically approach the sum of the self-information of the individual messages (recall that the self-information of a message m is defined as s(m) = -lg p(m)).

To see the significance of this, consider sending 1000 messages each having probability 0.999. Using a Huffman code, each message has to take at least 1 bit, requiring 1000 bits to be sent. On the other hand the self-information of each message is only -lg(0.999), approximately 0.00144 bits, so the sum of this self-information over 1000 messages is only about 1.4 bits. It turns out that arithmetic coding will send all the messages using only 3 bits, a factor of hundreds fewer than a Huffman coder. Of course this is an extreme case, and when all the probabilities are small, the gain will be less significant. Arithmetic coders are therefore most useful when there are large probabilities in the probability distribution.
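The arithmetic in this example can be checked directly; this is a sketch of the calculation, and "about 1.4 bits" comes from rounding.

```python
import math

p = 0.999  # probability of each of the 1000 messages
n = 1000

huffman_bits = n * 1           # a Huffman code needs >= 1 bit per message
self_info = -math.log2(p)      # self-information of one message, in bits
total_info = n * self_info     # information content of the whole sequence
```

The information content of the whole sequence is roughly 1.44 bits, against the 1000 bits a per-message Huffman code must spend.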

The main idea of arithmetic coding is to represent each possible sequence of n messages by a separate interval on the number line between 0 and 1, e.g. the interval from .2 to .5. For a sequence of messages with probabilities p(1), ..., p(n), the algorithm will assign the sequence to an interval of size PROD { p(i) }, by starting with an interval of size 1 (from 0 to 1) and narrowing the interval by a factor of p(i) on each message i. We can bound the number of bits required to uniquely identify an interval of size s, and use this to relate the length of the representation to the self-information of the messages.

In the following discussion we assume the decoder knows when a message sequence is complete, either by knowing the length of the message sequence or by including a special end-of-file message. This was also implicitly assumed when sending a sequence of messages with Huffman codes, since the decoder still needs to know when a message sequence is over.

We will denote the probability distribution of a message set as { p(1), ..., p(m) }, and we define the cumulative probability for the probability distribution as

f(j) = SUM { p(i) : 1 ≤ i ≤ j-1 }    (6)

Figure 3: An example of generating an arithmetic code, assuming all messages are from the same probability distribution: a = .2, b = .5 and c = .3. The interval given by the message sequence babc is [.255, .27).

So, for example, the probabilities [.2, .5, .3] correspond to the cumulative probabilities [0, .2, .7]. Since we will often be talking about sequences of messages, each possibly from a different probability distribution, we will denote the probability distribution of the ith message as { p_i(1), ..., p_i(m_i) } and the cumulative probabilities as { f_i(1), ..., f_i(m_i) }. For a particular sequence of message values, we denote the index of the ith message value as v_i. We will use the shorthand p_i for p_i(v_i) and f_i for f_i(v_i). Arithmetic coding assigns an interval to a sequence of messages using the following recurrences

l(1) = f(1),    s(1) = p(1)
l(i) = l(i-1) + s(i-1) f(i)    (i > 1)
s(i) = s(i-1) p(i)    (i > 1)        (7)

where l(i) is the lower bound of the interval and s(i) is its size, i.e. the interval is given by [l(i), l(i) + s(i)). We assume the interval is inclusive of the lower bound, but exclusive of the upper bound. The recurrence narrows the interval on each step to some part of the previous interval. Since the interval starts in the range [0, 1), it always stays within this range. An example of generating an interval for a short message sequence is illustrated in Figure 3. An important property of the intervals generated by Equation 7 is that all unique message sequences of length n will have non-overlapping intervals. Specifying an interval therefore uniquely determines the message sequence.
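The recurrences can be sketched directly. Run on the Figure 3 example (a = .2, b = .5, c = .3, sequence babc), they reproduce the interval [.255, .27). This is an illustrative floating-point version and ignores the precision issues a real arithmetic coder must handle.

```python
def arithmetic_interval(sequence, probs):
    """Return (l, s): the lower bound and size of the interval for the
    message sequence, per l(i) = l(i-1) + s(i-1) f(i), s(i) = s(i-1) p(i)."""
    # Cumulative probabilities f(j) = sum of p(i) for i < j.
    f, acc = {}, 0.0
    for sym, p in probs.items():
        f[sym] = acc
        acc += p
    l, s = 0.0, 1.0
    for m in sequence:
        l = l + s * f[m]   # shift the lower bound within the current interval
        s = s * probs[m]   # shrink the interval by the message probability
    return l, s

probs = {"a": 0.2, "b": 0.5, "c": 0.3}
l, s = arithmetic_interval("babc", probs)  # interval [l, l + s)
```

Note the final interval size equals the product of the four message probabilities, .5 * .2 * .5 * .3 = .015, matching the self-information bound discussed above.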

In fact, any number within an interval uniquely determines the message sequence. The job of decoding is basically the same as encoding, but instead of using the message value to narrow the interval, we use the interval to select the message value, and then narrow it. We can therefore "send" a message sequence by specifying a number within the corresponding interval.

The method of arithmetic coding was suggested by Elias, and presented by Abramson in his text on information theory [Abramson 1963]. Implementations of Elias' technique were developed by Rissanen, Pasco, Rubin, and, most recently, Witten et al. [Rissanen 1976; Pasco 1976; Rubin 1979; Witten et al. 1987]. We present the concept of arithmetic coding first and follow with a discussion of implementation details and performance.

In arithmetic coding, a source ensemble is represented by an interval between 0 and 1 on the real number line. Each symbol of the ensemble narrows this interval. As the interval becomes smaller, the number of bits needed to specify it grows. Arithmetic coding assumes an explicit probabilistic model of the source. It is a defined-word scheme which uses the probabilities of the source messages to successively narrow the interval used to represent the ensemble. A high probability message narrows the interval less than a low probability message, so that high probability messages contribute fewer bits to the coded ensemble. The method begins with an unordered list of source messages and their probabilities. The number line is partitioned into subintervals based on cumulative probabilities. A small example will be used to illustrate the idea of arithmetic coding. Given source messages {A, B, C, D, #} with probabilities {.2, .4, .1, .2, .1}, Figure 3.9 demonstrates the initial partitioning of the number line. The symbol A corresponds to the first 1/5 of the interval [0,1); B the next 2/5; D the subinterval of size 1/5 which begins 70% of the way from the left endpoint to the right. When encoding begins, the source ensemble is represented by the entire interval [0,1). For the ensemble AADB#, the first A reduces the interval to [0,.2) and the second A to [0,.04) (the first 1/5 of the previous interval). The D further narrows the interval to [.028,.036) (1/5 of the previous size, beginning 70% of the distance from left to right). The B narrows the interval to [.0296,.0328), and the # yields a final interval of [.03248,.0328). The interval, or rather any number i within the interval, may now be used to represent the source ensemble.

Source message  Probability  Cumulative probability  Range

A   .2   .2    [0, .2)
B   .4   .6    [.2, .6)
C   .1   .7    [.6, .7)
D   .2   .9    [.7, .9)
#   .1   1.0   [.9, 1.0)
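Decoding can be sketched as the mirror of encoding: repeatedly pick the symbol whose subinterval contains the number, then rescale. Using the table above and the number .0326, which lies in the final interval [.03248, .0328), the sketch below recovers AADB#. It is an illustrative floating-point version only.

```python
def arithmetic_decode(x, probs, eof="#"):
    """Decode a number x in [0,1) into a message sequence, stopping at
    the end-of-file symbol. probs maps symbols to probabilities, listed
    in cumulative order."""
    # Build each symbol's subinterval [f, f + p).
    ranges, acc = {}, 0.0
    for sym, p in probs.items():
        ranges[sym] = (acc, acc + p)
        acc += p
    out = []
    while out[-1:] != [eof]:
        for sym, (lo, hi) in ranges.items():
            if lo <= x < hi:
                out.append(sym)
                x = (x - lo) / probs[sym]  # rescale back to [0,1)
                break
        else:
            raise ValueError("x falls outside every subinterval")
    return "".join(out)

probs = {"A": 0.2, "B": 0.4, "C": 0.1, "D": 0.2, "#": 0.1}
decoded = arithmetic_decode(0.0326, probs)
```

The end-of-file symbol # plays the role discussed earlier: it tells the decoder when the message sequence is over.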


Adaptive Huffman coding was first conceived independently by Faller and Gallager [Faller 1973; Gallager 1978]. Knuth contributed improvements to the original algorithm [Knuth 1985] and the resulting algorithm is referred to as algorithm FGK. A more recent version of adaptive Huffman coding is described by Vitter [Vitter 1987]. All of these methods are defined-word schemes which determine the mapping from source messages to codewords based upon a running estimate of the source message probabilities. The code is adaptive, changing so as to remain optimal for the current estimates. In this way, the adaptive Huffman codes respond to locality. In essence, the encoder is "learning" the characteristics of the source. The performance of the adaptive methods can also be worse than that of the static method. Upper bounds on the redundancy of these methods are presented in this section. As discussed in the introduction, the adaptive method of Faller, Gallager and Knuth is the basis for the UNIX utility compact. The performance of compact is quite good, providing typical compression factors of 30-40%.


The basis for algorithm FGK is the sibling property, defined by Gallager [Gallager 1978]: A binary code tree has the sibling property if each node (except the root) has a sibling and if the nodes can be listed in order of nonincreasing weight with each node adjacent to its sibling. Gallager proves that a binary prefix code is a Huffman code if and only if the code tree has the sibling property. In algorithm FGK, both sender and receiver maintain dynamically changing Huffman code trees. The leaves of the code tree represent the source messages and the weights of the leaves represent frequency counts for the messages. At any point in time, k of the n possible source messages have occurred in the message ensemble.

Figure 4.1 — Algorithm FGK processing the ensemble EXAMPLE. (a) Tree after processing "aa bb"; 11 will be transmitted for the next b. (b) After encoding the third b; 101 will be transmitted for the next space; the tree will not change; 100 will be transmitted for the first c. (c) Tree after update following first c.

Initially, the code tree consists of a single leaf node, called the 0-node. The 0-node is a special node used to represent the n-k unused messages. For each message transmitted, both parties must increment the corresponding weight and recompute the code tree to maintain the sibling property. At the point in time when t messages have been transmitted, k of them distinct, and k < n, the tree is a legal Huffman code tree with k+1 leaves, one for each of the k messages and one for the 0-node. If the (t+1)st message is one of the k already seen, the algorithm transmits a(t+1)'s current code, increments the appropriate counter and recomputes the tree. If an unused message occurs, the 0-node is split to create a pair of leaves, one for a(t+1), and a sibling which is the new 0-node. Again the tree is recomputed. In this case, the code for the 0-node is sent; in addition, the receiver must be told which of the n-k unused messages has appeared. At each node a count of occurrences of the corresponding message is stored. Nodes are numbered indicating their position in the sibling property ordering. The updating of the tree can be done in a single traversal from the a(t+1) node to the root. This traversal must increment the count for the a(t+1) node and for each of its ancestors. Nodes may be exchanged to maintain the sibling property, but all of these exchanges involve a node on the path from a(t+1) to the root. Figure 4.2 shows the final code tree formed by this process on the ensemble EXAMPLE.

Figure 4.2 — Tree formed by algorithm FGK for ensemble EXAMPLE.

Ignoring overhead, the number of bits transmitted by algorithm FGK for the EXAMPLE is 129. The static Huffman algorithm would transmit 117 bits in processing the same data. The overhead associated with the adaptive method is actually less than that of the static algorithm. In the adaptive case the only overhead is the n lg n bits needed to represent each of the n different source messages when they appear for the first time. (This is in fact conservative; rather than transmitting a unique code for each of the n source messages, the sender could transmit the message's position in the list of remaining messages and save a few bits in the average case.) In the static case, the source messages need to be sent, as does the shape of the code tree. Figure 4.3 illustrates an example on which algorithm FGK performs better than static Huffman coding even without taking overhead into account. Algorithm FGK transmits 47 bits for this ensemble while the static Huffman code requires 53.

Figure 4.3 — Tree formed by algorithm FGK for ensemble "e eae de eabe eae dcf".


Figure 4.4 — FGK tree with non-level order numbering.

The adaptive Huffman algorithm of Vitter (algorithm V) incorporates two improvements over algorithm FGK. First, the number of interchanges in which a node is moved upward in the tree during a recomputation is limited to one. This number is bounded in algorithm FGK only by l/2, where l is the length of the codeword for a(t+1) when the recomputation begins. Second, Vitter's method minimizes the values of SUM { l(i) } and MAX { l(i) } subject to the requirement of minimizing SUM { w(i) l(i) }. The intuitive explanation of algorithm V's advantage over algorithm FGK is as follows: as in algorithm FGK, the code tree constructed by algorithm V is the Huffman code tree for the prefix of the ensemble seen so far. The adaptive methods do not assume that the relative frequencies of a prefix represent accurately the symbol probabilities over the entire message. Therefore, the fact that algorithm V guarantees a tree of minimum height (height = MAX { l(i) }) and minimum external path length (SUM { l(i) }) implies that it is better suited for coding the next message of the ensemble, given that any of the leaves of the tree may represent that next message.

Figure 4.5 — Algorithm V processing the ensemble "aa bbb c".

Figure 4.6 illustrates the tree built by Vitter's method for the ensemble of Figure 4.3. Both SUM { l(i) } and MAX { l(i) } are smaller in the tree of Figure 4.6. The number of bits transmitted during the processing of the sequence is 47, the same used by algorithm FGK. However, if the transmission continues with d, b, c, f or an unused letter, the cost of algorithm V will be less than that of algorithm FGK. This again illustrates the benefit of minimizing the external path length SUM { l(i) } and the height MAX { l(i) }.

Figure 4.6 — Tree formed by algorithm V for the ensemble of Fig. 4.3.

Vitter proves that the performance of his algorithm is bounded by S - n + 1 from below and S + t - 2n + 1 from above [Vitter 1987]. At worst then, Vitter's adaptive method may transmit one more bit per codeword than the static Huffman method. The improvements made by Vitter do not change the complexity of the algorithm; algorithm V encodes and decodes in O(l) time, as does algorithm FGK.

EMPIRICAL RESULTS

Empirical tests of the efficiencies of the algorithms presented here are reported in [Bentley et al. 1986; Knuth 1985; Schwartz and Kallick 1964; Vitter 1987; Welch 1984]. These experiments compare the number of bits per word required; processing time is not reported. While theoretical considerations bound the performance of the various algorithms, experimental data is invaluable in providing additional insight. It is clear that the performance of each of these methods is dependent upon the characteristics of the source ensemble.

n K Static Alg. V Alg. FGK

100 96 83.0 71.1 82.4

500 96 83.0 80.8 83.5

961 97 83.5 82.3 83.7

Figure 5.1 — Simulation results for a small text file [Vitter 1987];

n = file size in 8-bit bytes,

K = number of distinct messages.

Vitter tests the performance of algorithms V and FGK against that of static Huffman coding. Each method is run on data which includes Pascal source code, the TeX source of the author's thesis, and electronic mail files [Vitter 1987]. Figure 5.1 summarizes the results of the experiment for a small file of text. The performance of each algorithm is measured by the number of bits in the coded ensemble; overhead costs are not included. Compression achieved by each algorithm is represented by the size of the file it creates, given as a percentage of the original file size. Figure 5.2 presents data for Pascal source code. For the TeX source, the alphabet consists of 128 individual characters; for the other two file types, no more than 97 characters appear. For each experiment, when the overhead costs are taken into account, algorithm V outperforms static Huffman coding as long as the size of the message ensemble (number of characters) is no more than 10^4. Algorithm FGK displays slightly higher costs, but never more than 100.4% of the static algorithm.

     n      K    Static   Alg. V   Alg. FGK
   100     32    57.4      56.2     58.9
   500     49    61.5      62.2     63.0
  1000     57    61.3      61.8     62.4
 10000     73    59.8      59.9     60.0
 12067     78    59.6      59.8     59.9

Figure 5.2 — Simulation results for Pascal source code [Vitter 1987];
n = file size in bytes,
K = number of distinct messages.
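The static-Huffman columns of Figures 5.1 and 5.2 count only the bits of the coded ensemble, expressed as a percentage of the 8-bit original. As an illustration of how such a figure is obtained (a minimal sketch, not Vitter's test harness), the following builds a static Huffman code for a message and reports its coded size:

```python
import heapq
from collections import Counter

def huffman_code_lengths(text):
    """Build a static Huffman tree and return {symbol: code length in bits}."""
    freq = Counter(text)
    # Heap entries are (weight, tie_breaker, tree); the tie-breaker keeps
    # Python from ever comparing two trees directly.
    heap = [(w, i, s) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate one-symbol alphabet
        return {heap[0][2]: 1}
    i = len(heap)
    while len(heap) > 1:                     # repeatedly merge the two lightest trees
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, i, (t1, t2)))
        i += 1
    lengths = {}
    def walk(tree, depth):                   # leaf depth = codeword length
        if isinstance(tree, tuple):
            walk(tree[0], depth + 1)
            walk(tree[1], depth + 1)
        else:
            lengths[tree] = depth
    walk(heap[0][2], 0)
    return lengths

def compressed_percent(text):
    """Coded-ensemble size as a percentage of the 8-bit original,
    excluding the cost of transmitting the code table."""
    lengths = huffman_code_lengths(text)
    freq = Counter(text)
    coded_bits = sum(freq[s] * lengths[s] for s in freq)
    return 100.0 * coded_bits / (8 * len(text))
```

Like the figures above, this ratio omits the overhead of the code table itself, which is why the adaptive methods can win once that overhead is charged to the static scheme.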

Witten et al. compare adaptive arithmetic coding with adaptive Huffman coding [Witten et al. 1987]. The version of arithmetic coding tested employs single-character adaptive frequencies and is a mildly optimized C implementation. Witten et al. compare the results provided by this version of arithmetic coding with the results achieved by the UNIX compact program (compact is based on algorithm FGK). On three large files which typify data compression applications, compression achieved by arithmetic coding is better than that provided by compact, but only slightly better (average file size is 98% of the compacted size). A file over a three-character alphabet, with very skewed symbol probabilities, is encoded by arithmetic coding in less than one bit per character; the resulting file size is 74% of the size of the file generated by compact. Witten et al. also report encoding and decoding times. The encoding time of arithmetic coding is generally half of the time required by the adaptive Huffman coding method. Decode time averages 65% of the time required by compact. Only in the case of the skewed file are the time statistics quite different. Arithmetic coding again achieves faster encoding, 67% of the time required by compact. However, compact decodes more quickly, using only 78% of the time of the arithmetic method.
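The sub-one-bit-per-character result on the skewed file has a simple information-theoretic explanation: a Huffman code must spend at least one whole bit on every symbol, while arithmetic coding approaches the source entropy, which for a very skewed alphabet lies well below one bit per symbol. A short sketch (the probabilities here are illustrative, not taken from Witten et al.) makes the gap concrete:

```python
from math import log2

def entropy_bits_per_symbol(probs):
    """Zeroth-order entropy: the lower bound, in bits per symbol, for any
    code that models symbols independently; arithmetic coding approaches it."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A skewed three-symbol source of the kind Witten et al. describe
# (probabilities assumed for illustration).
skewed = [0.95, 0.03, 0.02]
uniform = [1/3, 1/3, 1/3]

h_skewed = entropy_bits_per_symbol(skewed)    # well under 1 bit/symbol
h_uniform = entropy_bits_per_symbol(uniform)  # about 1.585 bits/symbol

# Any Huffman code assigns each symbol a codeword of at least 1 bit, so on
# the skewed source Huffman coding is bounded below by 1.0 bit/symbol,
# while arithmetic coding can approach h_skewed.
```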

FUTURE RESEARCH

Data compression is still very much an active research area. This section suggests possibilities for further study. Schemes for increasing the reliability of these codes while incurring only a moderate loss of efficiency would be of great value. This area appears to be largely unexplored. Possible approaches include embedding the entire ensemble in an error-correcting code or reserving one or more codewords to act as error flags. For adaptive methods it may be necessary for receiver and sender to verify the current code mapping periodically. For adaptive Huffman coding, Gallager suggests an "aging" scheme, whereby recent occurrences of a character contribute more to its frequency count than do earlier occurrences. This strategy introduces the notion of locality into the adaptive Huffman scheme. Cormack and Horspool describe an algorithm for approximating exponential aging [Cormack and Horspool 1984]. However, the effectiveness of this algorithm has not been established.
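A common way to realize aging with integer weights (a sketch of the general idea only, not the Cormack-Horspool algorithm) is to halve all frequency counts whenever their total reaches a limit, so that recent occurrences of a symbol come to dominate older ones:

```python
class AgingCounter:
    """Frequency counts in which recent occurrences of a symbol weigh more
    than earlier ones: whenever the running total reaches `limit`, every
    count is halved (rounding up, so no symbol's count drops to zero)."""

    def __init__(self, limit=1024):
        self.counts = {}
        self.total = 0
        self.limit = limit

    def observe(self, symbol):
        self.counts[symbol] = self.counts.get(symbol, 0) + 1
        self.total += 1
        if self.total >= self.limit:
            self._halve()

    def _halve(self):
        # Halving all counts ages the history geometrically: an occurrence
        # k halvings ago contributes only about 2**(-k) to the count.
        for s in self.counts:
            self.counts[s] = (self.counts[s] + 1) // 2
        self.total = sum(self.counts.values())
```

After a long run of one symbol followed by a burst of another, the newcomer's count catches up within a few halvings, which is exactly the locality effect Gallager's scheme is after.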

Both Knuth and Bentley et al. suggest the possibility of using the "cache" concept to exploit locality and minimize the effect of anomalous source messages. Preliminary empirical results indicate that this may be helpful. A problem related to the use of a cache is the overhead time required for deletion. Schemes for reducing the cost of a deletion could be considered. Another possible extension to algorithm BSTW is to investigate other locality heuristics. Bentley et al. prove that intermittent-move-to-front (move-to-front after every k occurrences) is as effective as move-to-front. It should be noted that there are many other self-organizing methods yet to be considered. Horspool and Cormack describe experimental results which imply that the transpose heuristic performs as well as move-to-front, and suggest that it is also easier to implement.

CONCLUSION

Data compression is a topic of much importance and many applications. Methods of data compression have been studied for almost four decades. This paper has provided an overview of data compression methods of general utility. The algorithms have been evaluated in terms of the amount of compression they provide, algorithm efficiency, and susceptibility to error. While algorithm efficiency and susceptibility to error are relatively independent of the characteristics of the source ensemble, the amount of compression achieved depends heavily upon the characteristics of the source. Semantic dependent data compression techniques are special-purpose methods designed to exploit local redundancy or context information. A semantic dependent scheme can usually be viewed as a special case of one or more general-purpose algorithms. It should also be noted that algorithm BSTW is a general-purpose technique which exploits locality of reference, a type of local redundancy.

Susceptibility to error is the main drawback of each of the algorithms presented here. Although channel errors are more devastating to adaptive algorithms than to static ones, it is possible for an error to propagate without limit even in the static case. Methods of limiting the effect of an error on the effectiveness of a data compression algorithm should be investigated.
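A small demonstration with a hypothetical static prefix code shows why: flipping a single channel bit can shift the codeword boundaries, corrupting symbols well past the damaged bit. In this particular example the decoder happens to resynchronize after two symbols, but in general nothing bounds how far the damage extends:

```python
def decode(bits, code):
    """Decode a bit string with a prefix code given as {codeword: symbol}."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in code:          # a prefix code lets us emit as soon as we match
            out.append(code[buf])
            buf = ""
    return "".join(out)

# An illustrative prefix code (invented for this example, not from any source).
code = {"0": "a", "10": "b", "11": "c"}

clean = "0" + "10" + "11" + "0" + "10"   # encodes "abcab"
corrupt = "1" + clean[1:]                # a single channel error in the first bit
# decode(clean, code) yields "abcab"; decode(corrupt, code) yields "cacab" --
# the one flipped bit has corrupted the first two decoded symbols.
```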
