It must be an equality. In the classical approach (which I believe your texts follow), as proposed by Neyman and Pearson a long time ago, H0 must be stated as an equality. This is a fundamental assumption for how the maths here works: the parameter must be assumed to sit at a particular value in order to work out the distribution of the test statistic, and hence the probabilities.
The logic of hypothesis testing is basically a "what if" approach.
Assumptions/Framework: a particular statistical model, sampling from a population, and a (very simple) decision table with error thresholds
Input (ingredients): a data sample and an H0 statement (about a parameter of the assumed statistical model, i.e. about the unknown population)
Output: a decision
Steps (recipe, inference process) to arrive at the decision: 1/ Let us assume H0 is true. 2/ Let us compute some probabilities and check: assuming H0 were true, what is the probability of observing the data in front of us? 3/ This is done via a test statistic (computed assuming H0!), which is a random variable (the randomness comes from random sampling) and has some distribution, with some outcomes more likely and some less. 4/ If H0 is true and the probability of observing what we have in front of us falls below a "suspicion" threshold (the rejection level, either personally set or an industry standard), we tend to think that something is wrong with H0 rather than with the data (i.e. it is not just bad luck of drawing a very unlikely sample), and we say "there is evidence" to reject H0. No hard proof, but some "evidence".
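The recipe above can be sketched in a few lines of code. Here is a minimal one-sample, two-sided z-test with a known population standard deviation; the data, mu0, sigma and alpha below are all made-up numbers for illustration, not anything from your texts:

```python
import math

def z_test(sample, mu0, sigma):
    """Two-sided one-sample z-test: the 'what if H0 were true' recipe.
    sigma is an (assumed known) population standard deviation."""
    n = len(sample)
    xbar = sum(sample) / n
    # Steps 1-2: assume H0: mu = mu0; then the sample mean is
    # Normal(mu0, sigma / sqrt(n)), so we can standardise it.
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    # Step 3: p-value = probability, under H0, of a result at least this extreme
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    p = 2.0 * (1.0 - phi(abs(z)))
    return z, p

# Made-up data; H0: mu = 5.0, sigma assumed known to be 0.25
sample = [5.1, 4.8, 5.4, 5.0, 5.3, 4.9, 5.2, 5.5]
z, p = z_test(sample, mu0=5.0, sigma=0.25)
alpha = 0.05  # the "suspicion" threshold (rejection level)
decision = "reject H0" if p < alpha else "fail to reject H0"
```

Note that mu0 is fed in explicitly: the whole computation runs "inside" the assumption that H0 is true.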
The distribution of the test statistic depends on a parameter, and that parameter appears in H0 as a specified value (or, in other types of hypotheses, as a specific, fully defined case). We must have that value pinned down in order to calculate, or work out, the distribution and the probabilities.
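To make this concrete: the same observed data gives a different null distribution, and hence a different p-value, for every candidate value of the parameter. A small sketch (all numbers made up), again using a known-sigma z-test:

```python
import math

phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF

def p_value(xbar, mu0, sigma, n):
    # The null distribution of the sample mean is Normal(mu0, sigma / sqrt(n));
    # without a concrete mu0 there is no distribution to compute probabilities from.
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    return 2.0 * (1.0 - phi(abs(z)))

# Same observed mean, three different H0 values -> three different null
# distributions and three different p-values (illustrative numbers only).
pvals = {mu0: p_value(xbar=5.15, mu0=mu0, sigma=0.25, n=8)
         for mu0 in (4.9, 5.0, 5.1)}
```

The further the hypothesised mu0 sits from the observed mean, the smaller the p-value; none of this is computable until mu0 is a specific number.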
A good example is games that involve randomness, or betting in general. If you play poker and observe a very, very unlikely outcome (e.g. one of the players winning twenty times in a row, each time with four aces), you start to think that something is wrong with the game (too good an opponent? or maybe some cheating going on?) rather than that the winning just happened by chance. That is exactly how Neyman explained hypothesis testing later in one of his books (I think it was in his "First Course").
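Just to put a number on "very, very unlikely" (a back-of-the-envelope calculation, assuming fair and independent deals, which is exactly the H0 being doubted here):

```python
from math import comb

# Probability that a fair 5-card poker hand contains all four aces:
# the four aces plus any one of the remaining 48 cards.
p_four_aces = 48 / comb(52, 5)
# Twenty such wins in a row, assuming independent fair deals:
p_streak = p_four_aces ** 20
```

p_four_aces is already below 2 in 100,000, and p_streak is astronomically small, so "something is wrong with H0 (the game is fair)" becomes far more plausible than "we just got unlucky with the sample".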
One might say that the whole logic above, with equality in H0, is a little "dodgy". Why would we specify H0 at one particular special point (or possibly shoot at several different points as successive H0s) and reject it (or not), when we know that we cannot PROVE or CHOOSE any particular value as a sensible candidate for the initial guess (and all results will be stated in terms of probabilities anyway)? Usually we are not really interested in checking whether the mean of some measurement sits at exactly one particular value, and whether the data rejects or supports that; it would be better to know the probabilities of that mean (or whatever is of interest) lying in certain ranges (then we could put some money on it, since we would have probabilities). That is obviously a discussion for another time. The workaround is "confidence intervals", but in the classical approach these are fundamentally derived via hypothesis testing as well, and they suffer from the same "let's get some evidence via test-and-reject" probabilistic logic (and the "what would happen if we could repeat this many times" framing) ... unless one steps out into the Bayesian world.
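The "derived via hypothesis testing" point can be sketched directly: a classical confidence interval for the mean is exactly the set of mu0 values that a two-sided test at level alpha would fail to reject. A minimal known-sigma version (made-up numbers, standard normal quantile found by bisection to stay stdlib-only):

```python
import math

phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF

def z_confidence_interval(xbar, sigma, n, alpha=0.05):
    """CI for the mean, by inverting the test: the set of mu0 values with
    |xbar - mu0| <= z* . sigma / sqrt(n), i.e. the non-rejected H0 values."""
    # Solve phi(z*) = 1 - alpha/2 by bisection (z* is about 1.96 for alpha = 0.05)
    lo, hi, target = 0.0, 10.0, 1.0 - alpha / 2.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if phi(mid) < target else (lo, mid)
    half_width = ((lo + hi) / 2.0) * sigma / math.sqrt(n)
    return xbar - half_width, xbar + half_width

# Illustrative numbers: every mu0 inside (low, high) would NOT be rejected
# by the two-sided z-test at alpha = 0.05; every mu0 outside would be.
low, high = z_confidence_interval(xbar=5.15, sigma=0.25, n=8)
```

So the interval inherits the test's repeated-sampling interpretation rather than giving a probability that the parameter lies in a given range.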