I like to define done by collaborating to create numerous precise, concise examples of the feature in action, and then automating those examples as tests.
The collaboration is done with "the three amigos": the product owner, one or more people who understand the technology (e.g., developers), and one or more people skilled at probing boundaries and detecting ambiguity (e.g., testers). The goal is for everyone to ask questions and pose scenarios that define the feature's scope relatively comprehensively, both what's in scope and what's out of scope. Collaborating helps to build a shared understanding.
As the group fleshes out its understanding of the feature, express that understanding with concise examples. This helps to make the shared understanding concrete. I've had people tell me, "With these examples, I have a much better understanding of what we're all trying to accomplish. So instead of just working on a feature, now I actually care about it."
Automating the examples makes them more available and more visible to the whole team. They also become a marker of progress: each time the system passes another test, that's a clear sign of progress, and it's progress that someone cares about.
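As a minimal sketch of what an automated example might look like, consider a made-up "free shipping over $50" feature; the threshold, fee, and function names here are my own illustrations, not from any particular project. Each row in the table is one concise example the three amigos agreed on, and each passing row is visible progress.

```python
import pytest

# Hypothetical feature used purely for illustration: orders of $50 or more ship free.
FREE_SHIPPING_THRESHOLD = 50.00
STANDARD_SHIPPING_FEE = 4.99

def shipping_fee(order_total):
    """Return the shipping fee, in dollars, for a given order total."""
    return 0.00 if order_total >= FREE_SHIPPING_THRESHOLD else STANDARD_SHIPPING_FEE

# Each tuple is one agreed example: (order total, expected shipping fee).
@pytest.mark.parametrize("order_total, expected_fee", [
    (49.99, 4.99),   # just under the threshold: customer pays shipping
    (50.00, 0.00),   # exactly at the threshold: shipping is free
    (120.00, 0.00),  # well over the threshold: shipping is free
])
def test_shipping_fee_examples(order_total, expected_fee):
    assert shipping_fee(order_total) == expected_fee
```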
There may be criteria that don't lend themselves well to concise examples and automation. So I also like to identify areas of risk or concern (again, collaboratively). Then, as soon as there is functionality to execute (even if it's not done yet), begin probing those areas in exploratory test sessions.