The scope of this question is perhaps too broad: it does not specify quantitative targets for compression, it does not quantify $n$, it does not say whether lossy compression is acceptable (and if so, how much loss is tolerable), and it does not characterize the sequences in question. These are precisely the considerations that lead to good compression solutions.
Nevertheless, some opportunities may be available in particular cases.
**Small $n$**
As the question is written, exactly three series are selected for summing. If this is always the case, we can compress the data simply by precomputing the max and min for every one of the $\binom{n+2}{3} = n(n+1)(n+2)/6$ possible choices (allowing a series to be selected more than once). This will require less storage (and will be far faster in execution) provided $2\binom{n+2}{3} \le 3600\,n$, which works out to $n \le 102$. The algorithm is:
```
Precomputation:
    For p = n to 1 by -1 { For q = n to p by -1 { For r = n to q by -1 {
        Compute the min and max of S[p](i) + S[q](i) + S[r](i), 1 <= i <= 3600.
        Let k = Binomial[n-p+2, 3] + Binomial[n-q+1, 2] + (n-r) + 1.
        Store (min, max) at words 2*k-1 and 2*k, respectively.
    }}}

Execution:
    Input p, q, r.
    Assume, after an initial sort, that p <= q <= r.
    Let k = Binomial[n-p+2, 3] + Binomial[n-q+1, 2] + (n-r) + 1.
    Return the values at words 2*k-1 and 2*k as (min, max).
```
For example, with $n=15$ the saving is $97.5\%$, because only $2 \times 15 \times 16 \times 17/6 = 1360$ words are needed instead of $15 \times 3600 = 54000$.
With a trick we can handle the option to sum just two or even one sequence rather than exactly three of them: include one more sequence, consisting entirely of zeros, in the data structure. This increases $n$ by $1$, so now $n$ must not exceed $101$.
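As a concrete illustration, here is a minimal Python sketch of this precompute-and-rank scheme (the names `rank`, `precompute`, and `lookup` are my own, and the 1-based list layout for `S` is an assumption, not part of the pseudocode above):

```python
from math import comb
from itertools import combinations_with_replacement

T = 3600  # length of each sequence

def rank(n, p, q, r):
    # 1-based rank of the multiset {p, q, r} with p <= q <= r, matching
    # k = Binomial[n-p+2,3] + Binomial[n-q+1,2] + (n-r) + 1 above.
    return comb(n - p + 2, 3) + comb(n - q + 1, 2) + (n - r) + 1

def precompute(S):
    # S[1..n] hold the sequences (each of length T); S[0] is an unused placeholder.
    n = len(S) - 1
    table = [None] * (comb(n + 2, 3) + 1)  # slot 0 unused
    for p, q, r in combinations_with_replacement(range(1, n + 1), 3):
        sums = [S[p][i] + S[q][i] + S[r][i] for i in range(T)]
        table[rank(n, p, q, r)] = (min(sums), max(sums))
    return table

def lookup(table, n, p, q, r):
    p, q, r = sorted((p, q, r))  # the initial sort mentioned above
    return table[rank(n, p, q, r)]
```

To allow sums of just one or two sequences, append the all-zero sequence to `S` and pass its index in the unused argument slots.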
**Slowly varying sequences**
Store $S_p(1)$ and the successive differences $S_p(i+1)-S_p(i)$, $1 \le i \lt 3600$, for each $p$ as signed bytes. Reserve a few values (such as $-128$, $-127$, and $-126$) to signal that the difference actually requires more bytes (e.g., $-128$ means two bytes are needed, $-127$ means three, and $-126$ means four) and store that value in the immediately following bytes. An index of the beginnings of the sequences will also be needed, because they now have variable storage requirements. The encoding and decoding algorithms are straightforward.
When most differences are between $-125$ and $+127$, the average saving approaches $75\%$ (one byte per value instead of a four-byte word).
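Here is a sketch of one possible realization in Python (the choice to store $S_p(1)$ as a full four-byte value, the big-endian layout, and the names `encode_diffs`/`decode_diffs` are my assumptions, not fixed by the scheme):

```python
def encode_diffs(seq):
    # First value in full (assumed to fit in four bytes), then one signed byte
    # per difference when it fits in -125..127; otherwise an escape byte
    # (-128, -127, -126 meaning 2, 3, or 4 value bytes follow).
    out = bytearray(seq[0].to_bytes(4, "big", signed=True))
    for prev, cur in zip(seq, seq[1:]):
        d = cur - prev
        if -125 <= d <= 127:
            out += d.to_bytes(1, "big", signed=True)
        else:
            for width, escape in ((2, -128), (3, -127), (4, -126)):
                if -(1 << (8 * width - 1)) <= d < (1 << (8 * width - 1)):
                    out += escape.to_bytes(1, "big", signed=True)
                    out += d.to_bytes(width, "big", signed=True)
                    break
    return bytes(out)

def decode_diffs(data, length=3600):
    vals = [int.from_bytes(data[:4], "big", signed=True)]
    i = 4
    while len(vals) < length:
        d = int.from_bytes(data[i:i + 1], "big", signed=True)
        i += 1
        if d < -125:  # escape: -128 -> 2 bytes, -127 -> 3, -126 -> 4
            width = {-128: 2, -127: 3, -126: 4}[d]
            d = int.from_bytes(data[i:i + width], "big", signed=True)
            i += width
        vals.append(vals[-1] + d)
    return vals
```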
**General**
It is unstated whether the compression is intended to save offline storage or to economize on the data structures held in memory during execution. In the former case, apply any good general-purpose compression algorithm. In the latter case, the decompression has to be part of the execution code. As suggested in the reply by @John Doucette, a Huffman code is easy to implement and reasonably fast; the main requirement is that the source language support efficient bit-twiddling and bit-level storage and retrieval of data. It should be applied on top of the methods suggested above. With the second method (storing differences), the Huffman code will capitalize on the non-uniform frequencies of the bytes. Compression rates can be good but are rarely dramatic (because every value requires at least one bit of storage). Another attractive method, to be applied after one of the previous methods but before Huffman coding, is run-length encoding, which is appropriate when many of the sequences contain moderate to long stretches of constant values or constant differences.
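For concreteness, here is a minimal run-length encoder over a byte stream, such as the difference bytes above (the fixed `(count, byte)` pair format is just one of many possible layouts):

```python
def rle_encode(data: bytes) -> bytes:
    # Replace each run of a repeated byte with a (count, byte) pair, count <= 255.
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out += bytes((j - i, data[i]))
        i = j
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    # Pairs are (count, byte); expand each back into a run.
    out = bytearray()
    for k in range(0, len(data), 2):
        out += data[k + 1:k + 2] * data[k]
    return bytes(out)
```

Note that this naive pairing doubles the size of isolated bytes, so it pays off only when long runs dominate; a Huffman pass applied afterwards can recover some of that overhead.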