PACTOR-II

Basic Technical Information

I. Introduction

II. Modulation Methods

III. Robustness

IV. Intermodulation Products and Crest Factor

V. Error Control Coding

COMPARISON: PACTOR-II vs. CLOVER-II

 

I. Introduction

PACTOR was designed more than five years ago in Germany in order to overcome the known disadvantages of AMTOR and Packet Radio. PACTOR is a cheap and reliable means of fast, robust and error-free data transfer over short wave links that does not exceed the usual 500 Hz bandwidth limit for digital modes.
For the first time, not only the complete ASCII character set, but any given binary information could be transferred over short wave, even in very poor propagation conditions. Another aim of the system development was to utilize inexpensive and widespread hardware. Since 8-bit controllers without Digital Signal Processor chips (DSP's) were state of the art at that time, Frequency Shift Keying (FSK) had to be chosen as the modulation method. Up to now, PACTOR with analog Memory-ARQ is still the most robust digital mode used in Amateur Radio. It is also still the fastest FSK mode that fits into a 500 Hz channel.
These may be some of the reasons that have made PACTOR a standard, which is not only included in virtually all short wave modems in the Amateur Radio market, but is also widespread in the commercial world. In the meantime, however, more powerful CPU's and DSP's have been developed. Processing power that greatly exceeded the financial limits of the average Radio Amateur a few years ago has dropped to an acceptable price. Some of the current high-end modems for short wave operation already include a DSP, and in a few years you can expect all new modems to contain these chips. The throughput within a 500 Hz channel, as well as the robustness of a system, can still be dramatically improved by using modulation methods different from FSK, combined with powerful error control coding algorithms. A new standard that takes advantage of the forthcoming hardware generation is thus required.
These considerations have led to the development of PACTOR-II. This is not 'another new mode', but a fully backwards compatible improvement to the current PACTOR protocol, with automatic switching. As there are already several companies interested in licensing PACTOR-II, the German development group made sure that an implementation in units different from the original PTC-II will also be possible. However, using less powerful hardware means sacrificing at least a considerable part of the system's weak-signal performance.

This is the first of a series of three parts that describe the PACTOR-II system and the ideas behind it. In this first part, we explain some important technical fundamentals that everybody has to be aware of in order to understand the advantages of the new protocol. The second part describes the PTC-II hardware. The third part deals with the details of the PACTOR-II protocol.

II. Modulation Methods

1. Frequency Shift Keying (FSK)

FSK was the first teletype modulation method used, and is still by far the most widespread one. The information is encoded using rectangular pulses, represented by an alternating ON/OFF keying of two carrier frequencies. The symbol rate, the so-called baud rate, is defined as 1/T, where T represents the symbol duration. In FSK systems, the typical wide spectrum that could be expected when modulating a rectangular baseband signal onto a carrier is cancelled out by avoiding phase discontinuities between successive pulses. Thus only the main lobes around the two carrier frequencies are dominant. The spectral amplitude of a 200 baud signal at 500 Hz bandwidth is about 30 dB smaller than the amplitude in the center of the spectrum. If, however, the baud rate is increased, the bandwidth of the main lobes will naturally also increase. A 300 baud signal, for example, clearly exceeds the 500 Hz bandwidth, hence an ordinary CW filter cannot be applied without distorting the pulses. This greatly deteriorates the performance and thus also the Bit Error Rate (BER) of the system. Additionally, signals with higher baud rates suffer from a significant loss of immunity against time smearing (see below). For these reasons, 200 baud is commonly considered to be the maximum useful symbol rate of 2-tone FSK systems operating over short wave links. Additionally, with regard to the over-crowded frequencies, all new systems should generally not just require a narrow bandwidth, but provide an improved spectral efficiency to obtain a higher throughput. We therefore have to look for a different approach instead of using FSK.
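
To illustrate why such an FSK signal keeps its spectrum narrow, here is a minimal Python sketch (with an assumed sample rate and random data, not actual PACTOR parameters) that generates a 200 baud two-tone signal with continuous phase, i.e. by integrating the instantaneous frequency so that no phase jumps occur at the symbol boundaries:

```python
import numpy as np

# Minimal sketch (assumed parameters): a continuous-phase 2-FSK signal
# at 200 Bd with 200 Hz shift. The phase is obtained by integrating the
# instantaneous frequency, so there are no jumps at symbol boundaries.
fs = 8000          # sample rate in Hz (assumption for illustration)
baud = 200         # symbol rate
shift = 200        # tone spacing in Hz
bits = np.random.randint(0, 2, 50)

samples_per_symbol = fs // baud
# instantaneous frequency: +shift/2 for a '1', -shift/2 for a '0'
freq = np.repeat(np.where(bits == 1, shift / 2, -shift / 2), samples_per_symbol)
phase = 2 * np.pi * np.cumsum(freq) / fs   # integrate -> continuous phase
signal = np.cos(phase)                     # no discontinuities between pulses
```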

2. Phase Shift Keying (PSK)

In PSK systems, the phase of a signal is used as the means of information transfer. However, as the HF propagation conditions sometimes change quite rapidly, it is very difficult to track the absolute phase of a signal. Therefore it has proven to be much more efficient to utilize the phase difference between successive pulses. This requires a slightly higher Signal to Noise Ratio (SNR), but in return, the resistance against multipath effects is dramatically increased. Modulation that employs phase differences between successive pulses to encode the information is called Differential Phase Shift Keying (DPSK). If there are only two possible phase differences, signalling logical one and zero, the modulation is called Differential Binary Phase Shift Keying (DBPSK). If there are four possible phase differences, signalling logical dibits, it is called Differential Quadrature Phase Shift Keying (DQPSK). For example, with a one-tone 100 baud DQPSK signal, 200 bit/sec. can be transferred. If more phase differences are distinguished, the corresponding systems are called 8-DPSK, 16-DPSK, etc.
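
A minimal sketch of the differential encoding idea, using an illustrative dibit-to-phase mapping (not necessarily the one used in PACTOR-II):

```python
import numpy as np

# DQPSK sketch: each dibit selects a phase *difference*, so the receiver
# only needs the phase change between successive pulses, not the
# absolute phase. The mapping below is illustrative.
dibit_to_dphi = {(0, 0): 0.0,
                 (0, 1): np.pi / 2,
                 (1, 1): np.pi,
                 (1, 0): 3 * np.pi / 2}

def dqpsk_phases(bits):
    """Return the absolute carrier phase of each transmitted symbol."""
    phase, phases = 0.0, []
    for i in range(0, len(bits) - 1, 2):
        phase = (phase + dibit_to_dphi[(bits[i], bits[i + 1])]) % (2 * np.pi)
        phases.append(phase)
    return phases

# 100 Bd DQPSK: 2 bits per symbol -> 200 bit/sec, as stated in the text
print(dqpsk_phases([0, 1, 1, 1, 1, 0, 0, 0]))
```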
As phase shift keying naturally implies phase discontinuities between successive pulses, the spectrum of a DPSK system with hard keying shows the typical strong side-lobes of a rectangular pulse spectrum. The amplitude of a hard-keyed 100 baud DPSK signal at a cut-off frequency of 500 Hz is only around 15 dB smaller than in the center of the spectrum. Thus hard keying must not be used on short wave, due to the resulting large bandwidth. To avoid this problem, the baseband signal of a PSK system must be specially prepared before it gets as far as being modulated onto the carrier. This is done by transforming the rectangular pulses containing the binary information into suitable wave forms, using special shaping algorithms. H. Nyquist designed a pulse with very useful properties, the so-called raised-cosine pulse, which does not produce any spectral spill-over. These pulses do not produce any inter-symbol interference either, since their amplitude is zero at the sampling times of successive pulses. Thus they can be overlapped without any interference between them, even if the pulses are four times longer than the corresponding rectangular pulses.
This is the reason why a very high information density and a good spectral efficiency can be obtained using raised-cosine pulses. Since a raised-cosine DPSK signal with a symbol rate of 100 baud only occupies a bandwidth of around 200 Hz at -45 dB, it is obvious that two of these signals can be placed together into a 500 Hz channel. The resulting system is called two-tone DPSK. It can robustly transfer 400 bit/sec. within a bandwidth of less than 500 Hz if DQPSK is applied, or up to 600 bit/sec. within the same bandwidth using 8-DPSK.
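
For illustration, a small Python sketch of the raised-cosine pulse (with an assumed roll-off factor) showing the zero crossings at the neighbouring sampling instants, which is what makes overlapping pulses free of inter-symbol interference:

```python
import numpy as np

def raised_cosine(t, T=1.0, beta=0.5):
    """Raised-cosine pulse for symbol period T and (assumed) roll-off beta."""
    t = np.asarray(t, dtype=float)
    # handle the removable singularity at |t| = T/(2*beta)
    sing = np.isclose(np.abs(t), T / (2 * beta))
    denom = 1.0 - (2 * beta * t / T) ** 2
    denom[sing] = 1.0                      # avoid division by zero
    h = np.sinc(t / T) * np.cos(np.pi * beta * t / T) / denom
    h[sing] = (np.pi / 4) * np.sinc(1.0 / (2 * beta))
    return h

# Zero crossings at the sampling instants of the neighbouring symbols:
# overlapping pulses therefore cause no inter-symbol interference.
print(raised_cosine(np.array([-3., -2., -1., 0., 1., 2., 3.])))
# -> approximately [0, 0, 0, 1, 0, 0, 0]
```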

III. Robustness

The simplest test of the robustness of a modulation system is the measurement of its behaviour in the presence of Additive White Gaussian Noise (AWGN). DQPSK is known to be more robust than FSK, even though it also has a better spectral efficiency. For example, to obtain a BER of 10^-4, the required SNR per bit is 10.7 dB when using DQPSK, but 12.3 dB when using FSK. DBPSK requires an even smaller SNR of 9.3 dB in that case, and is thus the most robust mode mentioned. It is also important to remember that signals with many levels, e.g. 16-DPSK, require more energy per bit than DBPSK or DQPSK. Generally, a compromise has to be found between the symbol duration (i.e. the baud rate) and the number of bits that each symbol has to carry. Short symbols do require a greater bandwidth, but a high throughput can be achieved with only a few levels per symbol.
As a result, the signal is more robust against AWGN than a system with the same throughput using longer symbols and more levels. On the other hand, short symbols are very susceptible to time smearing (see below) and require a higher bandwidth. DQPSK with 100 baud has proven to be a very good compromise between robustness against AWGN and time dispersion, especially if it is combined with powerful error control coding.
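
The quoted SNR figures can be checked roughly with the textbook error-rate formulas for DBPSK and non-coherent binary FSK on an AWGN channel (it is an assumption here that non-coherent detection is meant for FSK):

```python
import numpy as np

# Back-of-envelope check of the SNR-per-bit figures quoted above:
#   DBPSK:              Pb = 0.5 * exp(-Eb/N0)
#   non-coherent 2-FSK: Pb = 0.5 * exp(-Eb/(2*N0))
target_ber = 1e-4
ebn0_dbpsk = np.log(0.5 / target_ber)        # solve Pb equation for Eb/N0
ebn0_fsk   = 2 * np.log(0.5 / target_ber)

print(f"DBPSK: {10 * np.log10(ebn0_dbpsk):.1f} dB")   # about 9.3 dB
print(f"FSK  : {10 * np.log10(ebn0_fsk):.1f} dB")     # about 12.3 dB
# DQPSK has no such simple closed form; its requirement (about 10.7 dB)
# lies between the two values.
```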

Another very important point for a short wave communications system is its resistance against multipath effects, which occur if there is more than one path between transmitter and receiver. Due to the various delays at the receiving end, the combination of the different received signals does not result in a copy of the original signal, but in a more or less distorted wave form. Three major multipath effects can be distinguished: time dispersion or 'time smearing', frequency dispersion and selective fading. These three effects are closely related to each other. They are strong if the frequency used is much below the maximum usable frequency, and if the distance is long. A single-hop path on the 20m band, for example, normally does not suffer from severe multipath effects. However, a DX link on the 80m band at night often suffers from considerable multipath problems.
The short-term time jitter has a magnitude of up to 5 msec. Larger time smearing can only be observed under very special ionospheric conditions. A baud rate of 100 symbols per second has proven to be low enough for almost all possible propagation conditions, especially if powerful error control coding is applied.
Frequency dispersion means that the frequency of the original signal is shifted on the path between transmitter and receiver. It is the same effect as the so-called Doppler shift, which can be observed on signals from low orbit satellites. The magnitude of the Doppler shift on normal short wave paths is only a few Hertz, thus it does not influence ordinary FSK systems. However, the demodulator of a PSK signal needs very accurate information on the carrier frequency in order to work properly. A DQPSK system with a symbol rate of 100 baud can only deliver a correct output if the frequency error is less than +/- 12.5 Hz. Automatic frequency tracking must therefore be applied, which can easily be done on a DSP. The PACTOR-II signal, for example, can automatically be tracked by the PTC-II within an offset range of +/- 100 Hz. Longer symbols and more levels of a DPSK signal require a much higher frequency accuracy. For example, a 32 baud 16-DPSK signal, as used in CLOVER-II, needs an accuracy of better than 1 Hz. Thus even small Doppler shifts deteriorate the demodulation process, because it is not possible to track the frequency fast enough at such a high accuracy.
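
A quick sketch of where the +/- 12.5 Hz figure comes from: a carrier offset rotates every differential phase by a fixed amount per symbol, and DQPSK decisions fail once this rotation reaches half of the 90 degree spacing between the nominal phase differences:

```python
# An offset df rotates every DQPSK phase difference by df*T turns per
# symbol; decisions fail once that reaches 45 degrees = 1/8 turn.
baud = 100
T = 1.0 / baud               # symbol duration: 10 ms
max_offset = (1.0 / 8.0) / T
print(max_offset)            # -> 12.5 (Hz)
```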
Selective fading, the third multipath effect, mainly influences FSK systems, as the channel with the lower SNR determines the BER of the whole system if the converter cannot switch to space-only mode. The symmetrical property of a binary FSK signal is destroyed by selective fading. PSK modulation is quite robust against this effect.

IV. Intermodulation Products and Crest Factor

Whenever a signal consisting of two or more parallel carriers or 'tones' has to pass through a non-linear stage, intermodulation products are generated. Special attention has to be paid to the third order products, because they are located relatively close to the original signal components. The final RF power stage(s) of the average Amateur Radio transmitter are capable of a third order intermodulation performance of about -30 dB. A two-tone signal with a shift of 200 Hz thus produces third order intermodulation products that are located virtually within the original bandwidth of the signal. This means that there will not be any interference on adjacent channels due to intermodulation effects. However, a signal consisting of four tones that are spaced at 125 Hz will be broadened to around 1100 Hz of bandwidth at -30 dB when passing through the same stage.
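
As a rough illustration of how the number of tones affects the spread of the third-order products of the form 2*fi - fj, here is a small sketch with assumed tone positions (not the actual PACTOR-II or CLOVER-II tone plans):

```python
from itertools import product

# Positions of the third-order products 2*fi - fj for a set of tones;
# the tone frequencies below are purely illustrative.
def im3(tones):
    return sorted({2 * a - b for a, b in product(tones, repeat=2) if a != b})

print(im3([1400, 1600]))                         # two tones, 200 Hz shift
four = im3([1312.5, 1437.5, 1562.5, 1687.5])     # four tones, 125 Hz spacing
print(min(four), max(four), max(four) - min(four))  # spread of roughly 1100 Hz
```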

Another item that has to be considered is the Crest Factor, which is defined as the ratio of maximum signal amplitude to average signal amplitude. Modulation systems designed for radio frequencies should always have a low Crest Factor, so that the peaks of the signal wave form do not over-drive the final RF stages. Among other considerations, the Crest Factor is influenced by the number of tones used by a system. The more tones that are used, the more difficult it is to get a low Crest Factor. In addition, it is also influenced by the modulation method. PSK, for example, leads to a lower Crest Factor than Amplitude Shift Keying (ASK).
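
The effect of the number of tones on the Crest Factor can be illustrated with plain, unshaped multi-tone carriers; the actual PACTOR-II and CLOVER-II waveforms are pulse-shaped, so the figures in the comparison table below differ from this toy example:

```python
import numpy as np

# Illustrative only: crest factor (peak / average amplitude, as defined
# above) of unshaped multi-tone carriers. The four-tone carrier comes
# out with a clearly higher crest factor than the two-tone one.
fs = 48000
t = np.arange(0, 2.0, 1 / fs)

def crest(freqs):
    s = sum(np.cos(2 * np.pi * f * t) for f in freqs)
    return np.max(np.abs(s)) / np.mean(np.abs(s))

print(round(crest([1400, 1600]), 2))                        # two tones
print(round(crest([1312.5, 1437.5, 1562.5, 1687.5]), 2))    # four tones
```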

V. Error Control Coding

The use of a reliable modulation system is only one of the essential steps towards the goal of optimum data transmission over the difficult paths encountered on HF radio. Dramatic improvements can additionally be obtained by correct preparation of the data before it gets as far as being transmitted by the modem. To be effective, this process, known as Error Control Coding (ECC), imposes very high computing demands on the system processor. Actually the final limit of achievable transmission reliability depends solely on the processing power used for the ECC. The more power available, the closer the theoretical throughput limit, the so-called Shannon Boundary, can be approached.

The basic idea behind this coding is quite simple: a certain number of redundant bits is appended to the original information that has to be transmitted through a noisy channel. The redundant bits are generated from the original data by applying special rules, depending on the chosen code. Data and redundancy then form a new string of bits, which is called a code word. The ratio of the number of information bits to the total length of the code word is called the code rate. The number of valid code words is obviously less than the number of possible code words. A good code and the corresponding encoding process must produce only those valid code words which have a maximum mutual Hamming distance, that means a maximum number of differing bits. This maximum mutual distance then allows code words to be recognized and distinguished, even in the presence of received errors. For example, if the valid code words of a specific code have a minimum mutual Hamming distance of three, each code word containing a single error can be corrected, as the valid code word with the greatest similarity is then the correct one.
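
A toy example of this distance argument, using the trivial (3,1) repetition code, whose two valid code words have a mutual Hamming distance of three:

```python
# Toy illustration of nearest-code-word decoding: with a minimum
# Hamming distance of three, any single-bit error is still closer to
# the transmitted code word than to the other valid one.
VALID = ["000", "111"]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    # pick the valid code word with the smallest Hamming distance
    return min(VALID, key=lambda cw: hamming(cw, received))

print(decode("010"))   # single error -> corrected to "000"
print(decode("110"))   # single error -> corrected to "111"
```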

Two main approaches to ECC can be distinguished: block codes and convolutional codes. Both always require data interleaving to be effective on channels with burst errors. When applying block codes, the message or packet is divided into short blocks containing only a few bits. Each short block is then encoded separately and forms a relatively short code word. Popular block codes are the Golay code, the Hamming codes and the Reed-Solomon code. Block codes can easily be implemented, as they often show a cyclic property and thus do not require much processing power. However, they have proven to be relatively weak, especially if the BER is high. They are only able to correct a few errors in each code word and thus do not provide any benefits in very noisy channels or poor propagation conditions. Additionally, it is very difficult to utilize soft decision when using block codes. Soft decision means that the decoder uses not only binary decisions for the error finding process, but also the analog values provided by the demodulator section. It works similarly to the analog Memory-ARQ of PACTOR and requires an ADC or DSP.
If convolutional codes are applied, the entire message or packet is encoded and the resulting code words are longer than the original packet. These codes are very powerful, and their efficiency is only limited by processing speed. The complexity of a convolutional code mainly depends on the length of the tapped shift register, which works as a binary convolver and represents the heart of the convolutional encoder. This specific number is called the constraint length. It provides an upper boundary of the coding gain that can be achieved by a convolutional code. Several methods have been developed for decoding these codes. The optimum decoder, which allows maximum likelihood decoding, is called the Viterbi decoder. Unfortunately, the relationship between constraint length and complexity of a Viterbi decoder is not a linear one; the complexity increases exponentially. Real-time applications of Viterbi decoders were limited to quite short constraint lengths for a long time due to slow processor speeds. Nowadays, using the new generation of DSP's, it has become possible to apply convolutional codes with constraint length 9 or even higher. As a major advantage of convolutional codes and the Viterbi decoder, soft decision may be implemented and only slightly increases the complexity of the system. PACTOR-II, as implemented in the original German PTC-II, applies a convolutional code with constraint length 9 and soft decision.
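
A sketch of such a tapped-shift-register encoder is shown below; the generator polynomials used are the widely known constraint length 7 pair (133/171 octal), since the actual PACTOR-II polynomials are not given in this text:

```python
# Rate-1/2 convolutional encoder built from a tapped shift register.
# Tap masks: the common K=7 pair (133, 171 octal); bit ordering and
# polynomials are illustrative, not the PACTOR-II code itself.
G1, G2 = 0o133, 0o171
K = 7                      # constraint length = register length

def conv_encode(bits):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # shift new bit in
        out.append(bin(state & G1).count("1") % 2)    # parity of taps G1
        out.append(bin(state & G2).count("1") % 2)    # parity of taps G2
    return out

print(conv_encode([1, 0, 1, 1, 0, 0]))   # two output bits per input bit
```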

© 1996 SCS GmbH

 

COMPARISON: PACTOR-II vs. CLOVER-II
(PACTOR-II entries refer to the PTC-II; CLOVER-II entries refer to the P-38 and PCI-4000/M.)

Feature: Robustness against Noise and Interference
PACTOR-II: VERY HIGH, links possible down to -18 dB (inaudible signals). ADVANCED MEMORY-ARQ.
CLOVER-II: Links only possible with relatively strong and stable signals. NO MEMORY-ARQ.

Feature: Speed
PACTOR-II: Up to 1200 bit/sec, even with medium-quality signals. Low protocol overhead. Fast and spontaneous half-duplex operation. Great chat mode.
CLOVER-II: Max. 750 bit/sec, if signals are VERY strong and stable. P-38 restricted to max. 375 bit/sec. Relatively high protocol overhead due to full-duplex simulation.

Feature: Frequency Accuracy
PACTOR-II: Offsets of initially +/-80 Hz are tolerated and automatically compensated. Automatic and fast/robust drift compensation.
CLOVER-II: Accuracy of better than +/-15 Hz required. NO AUTOMATIC DRIFT COMPENSATION.

Feature: Occupied Bandwidth
PACTOR-II: 450 Hz at -50 dB. 300 Hz at -6 dB. (The signal completely "fits" into a common "500 Hz" CW filter bandwidth.)
CLOVER-II: 550 Hz at -50 dB. 440 Hz at -6 dB. (A CLOVER-II signal does NOT "fit" into an ordinary "500 Hz" CW filter bandwidth due to the non-constant group delays of those filters.)

Feature: Tuning Display
PACTOR-II: Very accurate and robust, even on FSK modes. 15 multi-color LEDs.
CLOVER-II: Poor on PACTOR-I and RTTY. Requires a relatively high signal-to-noise ratio on CLOVER.

Feature: Status Display
PACTOR-II: 8 multi-color / 8 single-color LEDs for link state information, etc. 10-digit alphanumerical DOT MATRIX LED DISPLAY.
CLOVER-II: NO DISPLAY.

Feature: Crest Factor
PACTOR-II: Better than 1.4. High power efficiency.
CLOVER-II: Better than 2.0.

Feature: Data Compression
PACTOR-II: Self-adaptive ONLINE data compression (pseudo-Markov/Huffman).
CLOVER-II: NO ONLINE DATA COMPRESSION.

Feature: Multimode
PACTOR-II: PACTOR-II/PACTOR-I/AMTOR/RTTY/CW/AUDIO DENOISER/2 PACKET RADIO PORTS, full multitasking. (FAX & SSTV will follow as free updates.) Dual gateway HF/VHF/UHF: PACTOR/AMTOR<->PR/PR. PR: 1200/2400/9600 Bd, up to 3 MBd (!) possible for special (commercial) applications. (The PTC-II is a triple-port unit.)
CLOVER-II: CLOVER/PACTOR-I/AMTOR/RTTY/ASCII.

Feature: Common STBY Mode
PACTOR-II: Fast automatic switching between PACTOR-I, PACTOR-II, AMTOR and FEC. Monitor mode for PACTOR-II and PACTOR-I also active while waiting for connects.
CLOVER-II: NO AUTOMATIC MODE SWITCHING. New firmware must be uploaded to the cards by the user in order to change the actual mode.

Feature: Mailbox
PACTOR-II: Comfortable stand-alone mailbox, accessible from all ports including PR. Up to 2 Megabytes of mailbox memory.
CLOVER-II: NO BUILT-IN MAILBOX. (Some mailbox programs for PCs are available.)

Feature: TRX Remote Control
PACTOR-II: Full remote control and scanning support for Kenwood, Icom, Yaesu, SGC. (Comfortable stand-alone scanning mailbox.)
CLOVER-II: NO TRX PORT.

Feature: Tones
PACTOR-II: Adjustable in 1 Hz steps. Automatic USB/LSB detection. System is sideband independent.
CLOVER-II: 4 pre-defined settings. LSB must be used.

Feature: RF Shielding
PACTOR-II: 6-layer construction and compact SMD circuit. 28 LCL filters for RF decoupling. Metal case.
CLOVER-II: PC plug-in card.

Feature: Chip Set
PACTOR-II: Latest DSP and RISC technology featured.
CLOVER-II: Relatively cheap and slow standard chips.

© 1996 SCS GmbH

 

Here is some information on the difference between the Memory-ARQ of the PACTOR-Controller (PTC) and other PACTOR software without Memory-ARQ or with so-called 'digital memory ARQ'.

In conventional ARQ systems, the transmitter has to repeat a packet until the whole information, or at least the bit pattern of major parts of it, has been received completely error-free. It is evident that the probability of receiving a packet correctly decreases dramatically with lower signal-to-noise ratio (SNR). Some ways to maintain a contact in that case are to shorten the packet length or to apply error-correcting codes, which in turn will greatly reduce the maximum traffic speed when conditions are good. Another method, known as 'intelligent reconstruction' (sometimes used in AMTOR systems, since real memory ARQ cannot be used in AMTOR due to the missing CRC), combines error-free received parts of different transmissions of the same data packet in various ways, checking whether the result passes the corresponding redundancy test. A similar method, the so-called 'digital memory ARQ', reconstructs information by digital addition of several packets. However, a digital converter is only able to emulate a 1-bit ADC, which means that there is only a small chance of increasing speed in poor propagation conditions. If the SNR falls short of a certain 'threshold' and only a few bits are received in a correct pattern with every transmission period, the link can hardly be improved this way, even with a lot of repetitions.

Real memory ARQ always requires an ADC. In the PACTOR-Controller (PTC), as well as in the PacComm version, samples are taken from the FSK-demodulator low-pass-filter output with the aid of an 8-bit AD-converter. That means the information whether a signal is, e.g., 100 mV or only 1 mV above the 'converter threshold' is not lost, as in digital systems, but is used to reconstruct the data. Assuming white Gaussian noise, this accumulation method will maintain an HF link at a lower SNR than any digital system, because the whole bit pattern of a packet can be obtained using the information of several transmission periods; no correctly received parts are needed. Furthermore, since the shift levels are toggled with every transmission, even constant interfering signals within the receiver passband will not affect the mean value. Besides that, the ADC can be used to emulate adaptive filters and therefore save additional hardware.
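
A minimal simulation (with assumed SNR and packet length) of why accumulating the analog samples of several repetitions outperforms a single hard-decision copy:

```python
import numpy as np

# Illustration with assumed numbers: summing the raw demodulator
# samples of several repetitions raises the effective SNR, so the bit
# pattern emerges even though no single repetition decodes cleanly.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 200)
tx = 2.0 * bits - 1.0                 # bipolar symbols

noise_sigma = 2.0                     # very poor SNR (assumption)
repeats = [tx + rng.normal(0, noise_sigma, tx.size) for _ in range(8)]

hard_single = (repeats[0] > 0).astype(int)        # one noisy copy
soft_sum = np.sum(repeats, axis=0)                # analog accumulation
hard_accum = (soft_sum > 0).astype(int)

print("errors, single copy :", np.sum(hard_single != bits))
print("errors, accumulated :", np.sum(hard_accum != bits))   # far fewer
```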

The PACTOR protocol is especially designed to support memory ARQ (e.g. the packet header is inverted with every new information packet to prevent accumulation of old requested packets). So everybody who implements PACTOR has to use an ADC in order to maintain the high standard of PACTOR.

After watching some QSOs of OMs running software without an ADC, and realizing that there was mostly very slow traffic at 100 baud with many repetitions under quite good conditions, I am afraid that there are some major bugs in the phasing correction and speed adaptation software, besides the missing Memory-ARQ. This may be prejudicial to the PACTOR system in general.

I hope this information answered some questions sufficiently,

best 73 de Tom, DL2FAK

© 1996 SCS GmbH