First, go here and read the whole exchange.
Then, consider that in orthodox hypothesis testing, we never accept a hypothesis - such as the hypothesis of equal variances - we only fail to reject it. Your situation is actually a good example of why this is so. Remember that in the single case, the p value is best seen as a graded measure of the probability of data at least as extreme as yours, conditional on H0 being true.
In this specific situation, the p value tells you that, were you drawing samples from two populations with the same parameter value, only in about 8% of cases would the sample estimates differ as much as yours do. In other words, the estimates are quite different in your sample; however, your sample is too small to justify confidently rejecting the hypothesis that both groups were drawn from populations where the parameter is identical.
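If it helps to make that interpretation concrete, here is a minimal simulation sketch in Python. The group sizes and the observed variance ratio are made-up stand-ins for your actual numbers, and "as extreme" is taken to mean "at least as far from 1 in either direction":

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 10, 40          # hypothetical (unequal) group sizes
observed_ratio = 2.8     # hypothetical observed ratio of sample variances

reps = 100_000
count = 0
for _ in range(reps):
    # both groups drawn from populations with identical variance (H0 true)
    a = rng.normal(0.0, 1.0, n1)
    b = rng.normal(0.0, 1.0, n2)
    r = a.var(ddof=1) / b.var(ddof=1)
    # count samples whose variance ratio is at least as extreme as observed
    if max(r, 1 / r) >= max(observed_ratio, 1 / observed_ratio):
        count += 1

# the share of equal-variance samples at least this extreme,
# i.e. a rough two-sided p value for the variance ratio
print(count / reps)
```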
In other words, had your sample been only a little larger, a difference this big would have justified rejecting the null hypothesis. As they say, surely God loves p = .051 nearly as much as p = .049; and p = .08 is hardly evidence in favour of H0.
So while your test does not justify rejecting H0 at 95% confidence (i.e., a 5% alpha level), it is far from good positive evidence for it. In fact, the F test is known to have rather little power to detect exactly the kind of variance differences that threaten the t test in the first place! On the other hand, the t test is known to be highly robust to deviations from equal variances.
However, this robustness holds only while sample sizes are equal. In your case, they are highly unequal, so with a p value of .08 offering little confidence in H0 (regardless of any arbitrary alpha level), I would be somewhat concerned.
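To see why that combination is worrying, here is a rough Python/SciPy sketch of the pooled t test's Type I error rate when the means are truly equal but the smaller group has the larger spread; the group sizes and standard deviations are illustrative assumptions, not your data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_small, n_big = 10, 40        # hypothetical unequal group sizes
sd_small, sd_big = 2.0, 1.0    # larger spread in the smaller group
reps, alpha = 20_000, 0.05

rejections = 0
for _ in range(reps):
    # H0 on the means is true: both groups have mean 0
    a = rng.normal(0.0, sd_small, n_small)
    b = rng.normal(0.0, sd_big, n_big)
    p = stats.ttest_ind(a, b, equal_var=True).pvalue  # pooled (Student) t
    rejections += p < alpha

# with this configuration the rejection rate typically lands well
# above the nominal 0.05, i.e. the pooled t test over-rejects
print(rejections / reps)
```

(With the larger spread in the *bigger* group, the pooled test instead becomes conservative; either way, the nominal alpha is no longer what you actually get.)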
So if you want to make sure, two things lie ahead of you. First, visualise the sample distributions by inspecting QQ plots, histograms and/or similar methods. Then, perform a test more robust to differences in variances, such as Welch's t test, and see whether its result disagrees with the ordinary t test. If the samples appear reasonably similar, and the two tests deliver similar results, you're good to go. If not, well, you've already got the robust test calculated, haven't you?
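In Python/SciPy, the whole check might look roughly like this, with randomly generated placeholder data standing in for your two samples:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# placeholder samples; substitute your actual data here
rng = np.random.default_rng(2)
group_a = rng.normal(0.0, 2.0, 10)
group_b = rng.normal(0.5, 1.0, 40)

# step 1: visual check via one normal QQ plot per group
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
stats.probplot(group_a, dist="norm", plot=axes[0])
stats.probplot(group_b, dist="norm", plot=axes[1])
plt.show()

# step 2: Student's t (pooled variances) vs Welch's t (no pooling)
student = stats.ttest_ind(group_a, group_b, equal_var=True)
welch = stats.ttest_ind(group_a, group_b, equal_var=False)
print("Student p =", student.pvalue)
print("Welch   p =", welch.pvalue)
```

If the two p values land close together, your conclusion does not hinge on the equal-variance assumption; if they diverge, report the Welch result.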