Separating words in a sentence, at least when done by computer, is called segmentation (分かち書き) or tokenization (トークン化).
When you use an IME to input Japanese and hit the space bar to convert kana to kanji, the IME has to segment whatever you typed, then use a dictionary to replace the kana with kanji.
As you've probably learned by now, the IME is not 100% accurate. As for general tokenization rules, they depend on what you're tokenizing and on what you define as a word or token.
For example, Kuromoji, an open-source morphological analyzer for Japanese, will segment Earthliŋ's sentence above this way:
住宅 | 地域 | における | 本 | 機 | の | 使用 | は | 有害 | な | 電波 | 妨害 | を | 引き起こす | こと | が | あり | 、 | その | 場合 | ユーザー | は | 自己 | 負担 | で | 電波 | 妨害 | の | 問題 | を | 解決 | し | なけれ | ば | なり | ませ | ん | 。
Even though Earthliŋ parsed the last word しなければなりません as a single token, Kuromoji parsed it as 6 tokens. How you parse depends on what information you want to extract. Kuromoji uses a dictionary to aid the parsing, but another parser, TinySegmenter, is pattern based and also does a very good job:
住宅 | 地域 | に | おける | 本機 | の | 使用 | は | 有害 | な | 電波 | 妨害 | を | 引き起こす | こと | が | あり | 、 | その | 場合 | ユーザー | は | 自己 | 負担 | で | 電波 | 妨害 | の | 問題 | を | 解決 | し | なけれ | ば | なり | ませ | ん | 。
As you can see, there are a few differences between Kuromoji and TinySegmenter (you can run both of them right in the browser).
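The two token lists are short enough to compare by eye, but a quick set comparison pins down exactly where the parsers disagree (the token lists below are copied from the outputs above):

```python
# Token lists as produced by Kuromoji and TinySegmenter (copied from above).
kuromoji = ("住宅 地域 における 本 機 の 使用 は 有害 な 電波 妨害 を 引き起こす "
            "こと が あり 、 その 場合 ユーザー は 自己 負担 で 電波 妨害 の "
            "問題 を 解決 し なけれ ば なり ませ ん 。").split()
tinyseg  = ("住宅 地域 に おける 本機 の 使用 は 有害 な 電波 妨害 を 引き起こす "
            "こと が あり 、 その 場合 ユーザー は 自己 負担 で 電波 妨害 の "
            "問題 を 解決 し なけれ ば なり ませ ん 。").split()

# Tokens that only one of the two produced:
print("only Kuromoji:     ", sorted(set(kuromoji) - set(tinyseg)))
print("only TinySegmenter:", sorted(set(tinyseg) - set(kuromoji)))
```

This shows the disagreements are confined to における vs. に | おける and 本 | 機 vs. 本機; everything else matches.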
No person consciously tokenizes a sentence the way these small programs do, but segmentation should come naturally and unconsciously as you learn Japanese.
If you're short on time, however, a very crude way to tokenize is to group contiguous runs of the same script (hiragana, katakana, kanji):
住宅地域 | における | 本機 | の | 使用 | は | 有害 | な | 電波妨害 | を | 引 | き | 起 | こすことがあり | 、 | その | 場合 | ユーザー | は | 自己負担 | で | 電波妨害 | の | 問題 | を | 解決 | しなければなりません | 。
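This crude scheme is simple enough to sketch in a few lines of Python. Classifying characters by their Unicode names is just a rough heuristic, and `script_of` / `crude_segment` are illustrative names, not any library's API:

```python
import unicodedata
from itertools import groupby

def script_of(ch):
    """Very rough script classification via Unicode character names."""
    name = unicodedata.name(ch, "")
    if name.startswith("HIRAGANA"):
        return "hiragana"
    if name.startswith("KATAKANA"):   # also catches ー, the prolonged-sound mark
        return "katakana"
    if name.startswith("CJK UNIFIED"):
        return "kanji"
    return "other"                    # punctuation, Latin, digits, ...

def crude_segment(text):
    """Group contiguous runs of the same script into tokens."""
    return ["".join(run) for _, run in groupby(text, key=script_of)]

print(" | ".join(crude_segment("住宅地域における本機の使用は")))
# → 住宅地域 | における | 本機 | の | 使用 | は
```

As the output above shows, this does fine on kanji compounds and particles but mangles mixed-script words: 引き起こす comes out as 引 | き | 起 | こす… because the okurigana break up the kanji runs.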
The way Earthliŋ parsed the sentence above was in fact still vocabulary-intensive. For example, to segment 引き起こすことが into 引き起こす | こと | が (and not, say, 引き起 | こすこ | とが), he probably started from the fact that こと is a nominalizer and is therefore a token of its own. Delimiting こと then yields all three tokens: 引き起こす | こと | が. If he hadn't known beforehand that こと is a nominalizer, he would not have known it was a token, and he could not have chosen between the parsings 引き起こす | こと | が (correct) and 引き起 | こすこ | とが (incorrect).
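That "delimit a known token" step can be sketched mechanically. This is just an illustration of the reasoning, not how any real tokenizer works:

```python
fragment = "引き起こすことが"

# Knowing that こと is a nominalizer (hence a token of its own),
# splitting around it delimits the neighboring tokens as well.
before, token, after = fragment.partition("こと")
print([before, token, after])  # → ['引き起こす', 'こと', 'が']
```

Note that `str.partition` splits at the *first* occurrence, which happens to be right here; in general こと can occur inside other words, which is exactly why vocabulary knowledge, and not mere string matching, drives the segmentation.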
Well, I'm just guessing. Obviously I don't actually know what went on inside Earthliŋ's head, but I wanted to draw attention to the fact that when it comes to segmenting, having a vocabulary (or a dictionary, if you're a computer) leads to different segmentation strategies.