How it works

General

  • Load samples from WAV.

  • Resample to an intermediate sample rate: 20800Hz.

    • Get L (interpolation factor) and M (decimation factor) from the greatest common divisor (GCD) of the input and output sample rates.

    • Get the impulse response of the interpolating lowpass filter by the window method.

      • Get a Kaiser window.

      • Sample and window the function sin(n*cutoff)/(n*pi).

    • Do the equivalent of:

      • Insertion of L-1 zeros between samples.

      • Filter by doing the convolution with the impulse response.

      • Decimate by M.

  • Demodulate AM signal to get the APT signal.

    • Iterate over the samples, getting the amplitude from the current and previous sample (see below).

  • Find the position of the sync frames of the APT signal (the white and black stripes that you can see in the final image).

    • Calculate the cross correlation between a hardcoded sync frame and the APT signal.

    • The peaks of that cross correlation show the locations of the sync frames in the APT signal.

  • Map the values of the signal to numbers between 0 and 255.

  • Generate the final image starting a new line on every sync frame.
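
Roughly, the steps above could be sketched in Python like this (just a sketch using numpy/scipy for brevity, not the actual implementation; sync detection and image generation are left out, and the input is assumed to be a mono WAV):

```python
import numpy as np
import scipy.io.wavfile
import scipy.signal

def decode(path, inter_rate=20800, carrier=2400):
    """Very rough sketch of the pipeline described above."""
    in_rate, samples = scipy.io.wavfile.read(path)  # assumed mono
    samples = samples.astype(float)

    # Resample to the intermediate rate: L and M come from the GCD of
    # both rates; resample_poly does the interpolate-filter-decimate step.
    gcd = np.gcd(int(in_rate), inter_rate)
    l, m = inter_rate // gcd, int(in_rate) // gcd
    samples = scipy.signal.resample_poly(samples, l, m)

    # AM demodulation from consecutive samples (see "AM demodulation").
    w = 2 * np.pi * carrier / inter_rate
    curr, prev = samples[1:], samples[:-1]
    demod = np.sqrt(curr**2 + prev**2 - 2 * curr * prev * np.cos(w)) / np.sin(w)

    # Map the values to numbers between 0 and 255.
    demod = (demod - demod.min()) / (demod.max() - demod.min()) * 255
    return demod.astype(np.uint8)
```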

About frequency

Frequency unit comparison

I made this drawing because I keep forgetting how to represent frequencies. Here you can see the frequency spectrum of some APT signal sampled at 11025Hz; the peak is the AM carrier at 2400Hz, and everything wraps around because we are working with discrete signals.

Resampling algorithm

I did something like what you can see here but with an easier (and slower) implementation.

Resampling algorithm

For each output sample, we calculate the sum of the products between input samples and filter coefficients.
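
For illustration only, a slow Python sketch of this idea (not the actual implementation; l and m are the interpolation and decimation factors, and h would be the Kaiser-windowed lowpass described above):

```python
import numpy as np
import scipy.signal

def design_lowpass(l, m, beta=6.0):
    """Lowpass designed by the window method with a Kaiser window.

    The cutoff is the lower of the two Nyquist frequencies, expressed
    relative to the Nyquist frequency of the interpolated signal. The
    narrower the band, the more taps are needed (rough heuristic here).
    """
    taps = 10 * max(l, m) + 1
    return scipy.signal.firwin(taps, 1.0 / max(l, m), window=('kaiser', beta))

def resample(x, l, m, h):
    """Naive rational resampler: interpolate by l, filter with h, decimate by m.

    For each output sample we sum the products of input samples and filter
    coefficients; the zeros inserted by the interpolation are skipped
    because they contribute nothing to the sum.
    """
    y = np.zeros(len(x) * l // m)
    for t in range(len(y)):
        n = t * m                      # position in the interpolated signal
        for k in range(len(h)):
            j = n - k                  # interpolated sample hit by this tap
            if j >= 0 and j % l == 0 and j // l < len(x):
                y[t] += h[k] * x[j // l]
    return y * l                       # make up for the inserted zeros
```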

AM demodulation

Previously I used a Hilbert filter to get the analytic signal, because the absolute value of the analytic signal is the demodulated signal.

Then I found a very fast demodulator implemented in pietern/apt137. For each output sample, you only need the current input sample, the previous one and the carrier frequency:

AM demodulation formula

I couldn’t find the theory behind that method; it looks similar to I/Q demodulation. I was able to reach that final expression (which is used by pietern/apt137) by hand, and I wrote the steps in extra/demodulation.pdf. I think it only works if the input AM signal is oversampled; maybe that’s why I can’t find anything about it on the web.
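
In code, the expression looks roughly like this (a sketch, assuming the signal is x[n] = m[n]*sin(w*n) with a slowly varying envelope m[n]; fc is the 2400Hz carrier and fs the intermediate sample rate):

```python
import numpy as np

def am_demod(signal, fs, fc=2400):
    """Demodulate using only the current and previous samples.

    With w = 2*pi*fc/fs and a slowly varying envelope m[n], one can check
    that x[n]^2 + x[n-1]^2 - 2*x[n]*x[n-1]*cos(w) = m[n]^2 * sin(w)^2,
    so the envelope is recovered sample by sample.
    """
    w = 2 * np.pi * fc / fs
    curr, prev = signal[1:], signal[:-1]
    return np.sqrt(curr**2 + prev**2 - 2 * curr * prev * np.cos(w)) / np.sin(w)
```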

Syncing

I cross correlate the received signal against a sample sync frame (Channel A sync). As a sample sync frame I use the sequence:

.: Black
W: White

[..WW..WW..WW..WW..WW..WW..WW........]

Here you can see some examples.

Sync cross correlation

The first signal is the APT signal after AM demodulation; the next three are the results of the cross correlation using slightly different samples. They should have peaks where the sync frames are located in the original image.

The first one uses a sample with -1 for black and 1 for white. The second one is wrong; it uses 0 for black and 1 for white. The third uses the same sample as the first one but removes the DC component of the APT signal before doing the cross correlation.

Here you can see a close up of a peak.

Sync cross correlation peak
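
A sketch of this procedure in Python (assuming the demodulated signal has already been reduced to one value per pixel; the sync sample is the sequence shown above, and the peak picking is simplified to one peak per 2080-pixel line):

```python
import numpy as np

# Channel A sync sample: -1 for black, 1 for white (the version that works best)
SYNC = np.array([-1.0 if c == '.' else 1.0
                 for c in "..WW..WW..WW..WW..WW..WW..WW........"])

def find_sync_frames(signal, line_len=2080):
    """Locate sync frames by cross correlating against the sample above."""
    signal = signal - signal.mean()            # remove the DC component first
    corr = np.correlate(signal, SYNC, mode='valid')
    # roughly one sync frame per line: keep the best peak in each chunk
    return [i + int(np.argmax(corr[i:i + line_len]))
            for i in range(0, len(corr), line_len)]
```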

Telemetry

First read below what the telemetry means.

The hardest part is to determine where the telemetry wedges are located. Horizontally, the telemetry bands are always in the same place. By looking at a decoded image I get:

  • Telemetry A position: 994 pixels
  • Telemetry B position: 2034 pixels
  • Both have a width of 44 pixels

Now the problem is to know where the telemetry starts vertically. I thought of doing some kind of edge detection, but then I realized that we can take advantage of the fact that wedges 1 to 9 are always the same. So I use the following sample:

Telemetry bands

The variable part contains temperatures and a channel identifier. To determine the start of the telemetry frame I cross correlate the received telemetry band against that sample (excluding wedges 10 to 16). The peaks of the cross correlation show where the frames start.

To avoid noise I do the following:

  • As a sample I use wedges 1 to 9 two times.

  • Each telemetry band has a width of 44 pixels, so I calculate the mean of these 44 pixels. Wedges 1 to 14 are the same in both A and B, so we use the mean of 88 pixels.

  • To measure noise I calculate the variance of the same 88 pixels I use to calculate the mean.

  • There is more than one telemetry frame in each image, and I want to select the best one (the one with the least noise). So I divide the cross correlation by the variance to get a quality estimate (actually I’m using the standard deviation instead of the variance). The remaining peaks indicate both the starts of frames and low-noise frames.

The following image shows an example:

Telemetry bands
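
A rough Python sketch of the procedure above (the column positions and the 44-pixel width come from the list above; taking eight image lines per wedge is an assumption here, and the sample construction and noise estimate are simplified):

```python
import numpy as np

# Nominal values of wedges 1 to 9 (see "Telemetry wedges" below); each
# wedge is assumed to span 8 image lines. Wedges 10 to 16 vary, so they
# are masked out with zeros and excluded from the correlation.
WEDGES_1_TO_9 = [31, 63, 95, 127, 159, 191, 224, 255, 0]
FRAME = np.repeat(np.array(WEDGES_1_TO_9 + [0] * 7, dtype=float) - 128, 8)
FRAME[9 * 8:] = 0                  # mask the variable wedges 10 to 16
SAMPLE = np.tile(FRAME, 2)         # wedges 1 to 9, used two times

def find_telemetry_start(img, a_col=994, b_col=2034, width=44):
    """Find where telemetry frames start and pick the least noisy one.

    img: decoded image as a 2D array, one row per line (2080 px wide).
    """
    # Mean and standard deviation of the 88 telemetry pixels of each line
    # (44 from band A plus 44 from band B); the deviation measures noise.
    tele = np.hstack((img[:, a_col:a_col + width],
                      img[:, b_col:b_col + width])).astype(float)
    means = tele.mean(axis=1) - 128
    noise = tele.std(axis=1) + 1e-6

    # Cross correlation against the sample, divided by a noise estimate
    # so that low-noise frame starts stand out.
    corr = np.correlate(means, SAMPLE, mode='valid')
    quality = corr / np.array([noise[i:i + len(SAMPLE)].mean()
                               for i in range(len(corr))])
    return int(np.argmax(quality))
```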

I can also determine the values of the wedges without knowing where they are located vertically, by looking at the values of the pixels on the telemetry band and clustering them (e.g. with k-means). You can check that by making a histogram of the telemetry band in GIMP:

Telemetry band histogram on GIMP

This would be useful for contrast adjustment, but I’m not doing it because I can’t know the value of wedge 16 (channel ID) that way.
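
For completeness, the clustering idea could be sketched like this (just an illustration using scipy’s k-means on the per-line means of one telemetry band; the choice of nine clusters is arbitrary):

```python
import numpy as np
from scipy.cluster.vq import kmeans

def wedge_levels(band, k=9):
    """Estimate the gray levels present in a telemetry band by clustering.

    band: 2D array (lines x 44 pixels) of one telemetry band. Clustering
    the per-line means should return roughly the nine fixed wedge values
    plus whatever the variable wedges contain, without knowing where the
    frames start vertically.
    """
    means = band.mean(axis=1).astype(float).reshape(-1, 1)
    centroids, _ = kmeans(means, k)
    return np.sort(centroids.ravel())
```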

About APT images

Modulation

  • The signal is modulated first in AM and then in FM.

  • FM frequencies:

    • NOAA 15: 137.62MHz.

    • NOAA 18: 137.9125MHz.

    • NOAA 19: 137.1MHz.

  • AM carrier: 2400Hz.

Pixels

  • 8 bits/pixel.

  • The signal amplitude represents the brightness of each pixel.

  • Two lines per second, 4160 pixels per second.

  • 2080 pixels per line, each channel has 909 useful pixels per line.

Format

There are different imaging sensors, named Channel 1, 2, 3A, 3B, 4 and 5. Two of them are chosen by the satellite operators to be shown on two portions of the image, named Channel A and Channel B.

  • Channel A: During daylight it generally has images from the Channel 2 sensor (near visible light); at night it generally has Channel 3A (infrared).

  • Channel B: Generally has Channel 4 (infrared).

Each line has:

  • Channel A sync: 1040Hz square wave, seven cycles. This train of pulses appears in the image as seven vertical black and white stripes; its white stripes are slightly narrower than those of the Channel B sync.

  • Channel A space: Scan of deep space with the Channel A sensor; in the image it looks like a wide black or white vertical bar (see below for why). Once each minute, the spacecraft clock inserts minute markers into this portion of the image. These minute markers appear as thin, black or white horizontal lines that provide a 60 second time reference in the image.

  • Channel A image: Earth as seen from the Channel A sensor.

  • Channel A telemetry: Looks like gray-scale horizontal bars. See below.

  • Channel B sync: Seven pulses at 832 pulses per second. This train of pulses appears in the image as seven vertical black and white stripes; its white stripes are slightly wider than those of the Channel A sync.

  • Channel B space: Scan of deep space with the Channel B sensor; in the image it looks like a wide black or white vertical bar (see below for why). Once each minute, the spacecraft clock inserts minute markers into this portion of the image. These minute markers appear as thin, black or white horizontal lines that provide a 60 second time reference in the image.

  • Channel B image: Earth as seen from the Channel B sensor.

  • Channel B telemetry: Looks like gray-scale horizontal bars. See below.

The Channel A sync and the Channel B sync make up the distinctive “tick-tock” sound in the received APT audio signal.

Both Channel A space and Channel B space have a scan of deep space using the same sensor used to image Earth. If the sensor is sensitive to visible light, this part looks black (Channel 2, and I guess Channel 1 too). The infrared channels represent cold as white, so this part of the image looks white instead (Channels 3A and 4, and I guess 3B and 5 too).

The telemetry bands have calibration data and information about the current channel being transmitted. Bars/wedges 1 to 9 are used for contrast adjustment and are fixed; wedges 10 to 16 can change because they show temperatures used for the calibration of the infrared images.

Telemetry wedges

  1. Value: 31/255.

  2. Value: 63/255.

  3. Value: 95/255.

  4. Value: 127/255.

  5. Value: 159/255.

  6. Value: 191/255.

  7. Value: 224/255.

  8. Value: 255/255.

  9. Value: 0/255.

  10. Black body radiator thermometer #1.

  11. Black body radiator thermometer #2.

  12. Black body radiator thermometer #3.

  13. Black body radiator thermometer #4.

  14. Patch temperature.

  15. Back scan: Temperature of the black body radiator observed by the imaging sensor.

  16. Channel identification: When compared to the values of wedges 1 to 9, it tells which channel (sensor) is being used, as sketched below. Both this wedge and the back scan wedge are different on Channel A and B. See the list of channels below.
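
A small sketch of that comparison (the wedge values and the sensor-to-wedge mapping come from the lists in this section; everything else is illustrative):

```python
import numpy as np

WEDGES_1_TO_9 = [31, 63, 95, 127, 159, 191, 224, 255, 0]
# Sensor whose "equivalent wedge" is N (see the "Channels" list below)
SENSOR_FOR_WEDGE = {1: "1", 2: "2", 3: "3A", 4: "4", 5: "5", 6: "3B"}

def identify_channel(wedge16_value):
    """Return the sensor being transmitted, given the value of wedge 16.

    Wedge 16 repeats the value of the equivalent wedge (1 to 6) of the
    sensor in use, so we pick the closest one.
    """
    nearest = int(np.argmin([abs(wedge16_value - v) for v in WEDGES_1_TO_9[:6]]))
    return SENSOR_FOR_WEDGE[nearest + 1]
```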

Channels

Channel A is the left part of the image and Channel B is the right one. Each one can show any of the following sensors (also called channels).

  • 1: Visible. Equivalent wedge: 1. Wavelength: 0.58µm - 0.68µm. Daytime cloud and surface mapping.

  • 2: Near-infrared. Equivalent wedge: 2. Wavelength: 0.725µm - 1.00µm. Land-water boundaries.

  • 3A: Infrared. Equivalent wedge: 3. Wavelength: 1.58µm - 1.64µm. Snow and ice detection.

  • 3B: Infrared. Equivalent wedge: 6. Wavelength: 3.55µm - 3.93µm. Night cloud mapping, sea surface temperature.

  • 4: Infrared. Equivalent wedge: 4. Wavelength: 10.30µm - 11.30µm. Night cloud mapping, sea surface temperature.

  • 5: Infrared. Equivalent wedge: 5. Wavelength: 11.50µm - 12.50µm. Sea surface temperature.

References