In classification generally, the hypothesis class is the set of possible classification functions you're considering; the learning algorithm picks one function from that class.
For a decision tree learner, the hypothesis class would just be the set of all possible decision trees.
For a primal (linear) SVM, the hypothesis class is the set of functions
$$\mathsf H_d =\left\{ f(x) = \operatorname{sign}\left( w^T x + b \right) \mid w \in \mathbb R^d, b \in \mathbb R \right\}.$$
The SVM learning process involves choosing a $w$ and $b$, i.e. choosing a function from this class.
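To make this concrete, here's a minimal sketch (assuming scikit-learn is available; the toy data, `LinearSVC`, and `C=1.0` are just illustrative choices, not part of the argument) showing that the fitted classifier really is just one $(w, b)$ pair picked out of $\mathsf H_d$:

```python
# Minimal sketch: a fitted linear SVM is just a particular choice of (w, b) from H_d.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])  # toy 2D data
y = np.array([-1] * 50 + [+1] * 50)

clf = LinearSVC(C=1.0).fit(X, y)
w, b = clf.coef_.ravel(), clf.intercept_[0]   # the element of H_d the learner picked

f = lambda x: np.sign(x @ w + b)              # f(x) = sign(w^T x + b)
print(np.all(f(X) == clf.predict(X)))         # True: same hypothesis as the fitted model
```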
For a kernelized SVM, we have some feature function $\varphi : \mathcal X \to \mathcal H$ corresponding to the kernel by $k(x, y) = \langle \varphi(x), \varphi(y) \rangle_{\mathcal H}$; here the hypothesis class becomes
$$\mathsf H_k = \left\{ f(x) = \operatorname{sign}\left( \langle w, \varphi(x) \rangle_{\mathcal H} + b \right) \mid w \in \mathcal H, b \in \mathbb R \right\}.$$
Now, since $\mathcal H$ is often infinite-dimensional, we don't want to explicitly represent a $w \in \mathcal H$. But the representer theorem tells us that the $w$ which optimizes our SVM loss for a given training set $X = \{ x_i \}_{i=1}^n$ will be of the form $w = \sum_{i=1}^n \alpha_i \varphi(x_i)$. Noting that $$
\langle w, \varphi(x) \rangle_{\mathcal H}
= \left\langle \sum_{i=1}^n \alpha_i \varphi(x_i), \varphi(x) \right\rangle_{\mathcal H}
= \sum_{i=1}^n \alpha_i \left\langle \varphi(x_i), \varphi(x) \right\rangle_{\mathcal H}
= \sum_{i=1}^n \alpha_i k(x_i, x),$$
we can thus consider only the restricted set of functions
$$\mathsf H_k^X = \left\{ f(x) = \operatorname{sign}\left( \sum_{i=1}^n \alpha_i k(x_i, x) + b \right) \mid \alpha \in \mathbb R^n, b \in \mathbb R \right\}.$$
Note that $\mathsf H_k^X \subset \mathsf H_k$; but since the hypothesis the SVM algorithm would pick from $\mathsf H_k$ lies in $\mathsf H_k^X$, nothing is lost by restricting to it.
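As an illustration of why this restriction is useful computationally, here's a minimal sketch (plain NumPy; the RBF kernel, $\gamma$, and the coefficients are made up purely to show the call shape) of evaluating a hypothesis from $\mathsf H_k^X$: only $\alpha$, $b$, and the training points are needed, and $w \in \mathcal H$ is never formed explicitly.

```python
# Minimal sketch: evaluating f(x) = sign( sum_i alpha_i k(x_i, x) + b ) without
# ever representing w in the (possibly infinite-dimensional) feature space.
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    # k(x, y) = exp(-gamma * ||x - y||^2), one common choice of kernel
    return np.exp(-gamma * np.sum((x - y) ** 2))

def f(x, X_train, alpha, b, kernel=rbf_kernel):
    return np.sign(sum(a * kernel(xi, x) for a, xi in zip(alpha, X_train)) + b)

# made-up values, just to show the call shape
X_train = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
alpha   = np.array([0.7, -0.3, 0.0])   # alpha_3 = 0: the third point is not a support vector
b       = 0.1
print(f(np.array([0.5, 0.5]), X_train, alpha, b))
```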
The support vectors specifically are the points with $\alpha_i \ne 0$. Which points end up as support vectors depends on the regularization constant and so on, so I wouldn't say they're integrally related to the hypothesis class; but the set of possible support vectors, i.e. the training set $X$, does define $\mathsf H_k^X$ (along with the kernel $k$).
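For example, with scikit-learn's `SVC` (just an illustration of the point above, not something the derivation depends on), the indices of the training points with $\alpha_i \ne 0$ are exposed as `support_`, and changing the regularization constant `C` changes which, and how many, points those are:

```python
# Minimal sketch: which training points have alpha_i != 0 depends on C (and the kernel).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

for C in (0.1, 10.0):
    clf = SVC(kernel="rbf", C=C).fit(X, y)
    # clf.support_ holds the indices i with alpha_i != 0; clf.dual_coef_ holds y_i * alpha_i
    print(f"C={C}: {len(clf.support_)} support vectors out of {len(X)}")
```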