There are multiple ways to determine relative feature importance, but as far as I know your approach may already yield the best insight available!
AdaBoost's feature importance is derived from the feature importances provided by its base classifier. Assuming you use a Decision Tree as the base classifier, AdaBoost's feature importance is determined by averaging the feature importances of the individual Decision Trees.
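As a minimal sketch of this (hypothetical toy data; the base learner is passed via `estimator`, which was called `base_estimator` before scikit-learn 1.2), AdaBoost exposes the resulting importances through the same `feature_importances_` attribute a single tree has:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy data, just for illustration
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=2),  # Decision Tree base classifier
    n_estimators=100,
    random_state=0,
).fit(X, y)

# One importance value per feature; the values sum to 1
print(clf.feature_importances_)
```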
This is quite similar to the common practice of using a forest of trees to determine feature importance, as explained here. It relies on the fact that features found near the top of a tree contribute to the final prediction decision for a larger fraction of the input samples; this expected fraction can therefore be used to estimate the relative importance of a feature.
The difference between AdaBoost and, for example, a Random Forest (a forest of trees) that might influence the determination of feature importance lies in how they produce variants of the base classifier: the former produces variants with an increased focus on the "difficult" examples, while the latter produces variants by introducing randomness into the tree-building process. Which of the two yields the more "accurate" feature importance is unclear to me and is probably hard to say. A short comparison sketch follows below.
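To make the comparison concrete, here is a sketch (same kind of hypothetical toy data as above) fitting both ensembles on identical data; the feature rankings can disagree precisely because the two methods build their tree variants differently:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)

ada = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                         n_estimators=100, random_state=0).fit(X, y)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

for name, imp in [("AdaBoost", ada.feature_importances_),
                  ("RandomForest", rf.feature_importances_)]:
    # Rank feature indices from most to least important
    print(name, imp.argsort()[::-1])
```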
As an aside, this answer elaborates on the actual implementation of determining feature importance for an ensemble of trees (source code) in scikit-learn:
> In scikit-learn, we implement the importance as described in [Breiman, Friedman, "Classification and regression trees", 1984.] (often cited, but unfortunately rarely read...). It is sometimes called "gini importance" or "mean decrease impurity" and is defined as the total decrease in node impurity (weighted by the probability of reaching that node, which is approximated by the proportion of samples reaching that node) averaged over all trees of the ensemble.
For AdaBoost, the above would then become a weighted average over the trees (source code).
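If it helps, here is a sketch that reproduces that weighted average by hand for the toy setup from above; the per-tree importances are averaged with AdaBoost's `estimator_weights_` as weights:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
ada = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                         n_estimators=100, random_state=0).fit(X, y)

# Weighted average of per-tree gini importances, weighted by each tree's
# boosting weight (the slice guards against early termination of boosting)
weights = ada.estimator_weights_[: len(ada.estimators_)]
manual = np.average([tree.feature_importances_ for tree in ada.estimators_],
                    axis=0, weights=weights)

print(np.allclose(manual, ada.feature_importances_))  # True
```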
To conclude, your initial approach to feature importance with AdaBoost may already give good enough insight into relative feature "strength" compared to other common approaches.