2 comments

  • Jeebz 6 hours ago
    Hey HN,

    I'm one of the founders of Reavil.

    The Problem: As a PM, I spent years drowning in raw user feedback. We had plenty of data on where users were dropping off (analytics), but zero structured data on why. I found myself manually tagging rows in spreadsheets to find patterns, which wasn't scalable.

    The Solution: We built a lightweight embedded widget that captures feedback at friction points. We then pipe that unstructured text through an LLM to categorize it into actionable buckets: priority fixes, critical bugs, feature requests, etc.

    It basically turns qualitative "noise" into quantitative data without the manual grunt work, saving product teams a lot of time.
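
    For anyone curious, the categorization step is conceptually something like the sketch below (not our production code; it assumes the OpenAI Python SDK, a stand-in model name, and a simplified category list purely for illustration):

      import json
      from openai import OpenAI

      # Hypothetical bucket names for illustration; the real categories may differ.
      CATEGORIES = ["critical_bug", "priority_fix", "feature_request", "ux_friction", "other"]

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def categorize_feedback(text: str) -> dict:
          """Classify one raw piece of feedback into a category plus a one-line summary."""
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # stand-in model; any JSON-capable chat model works
              response_format={"type": "json_object"},
              messages=[
                  {"role": "system", "content": (
                      "You triage product feedback. Respond with JSON containing "
                      f"'category' (one of {CATEGORIES}) and 'summary' (one sentence)."
                  )},
                  {"role": "user", "content": text},
              ],
          )
          result = json.loads(resp.choices[0].message.content)
          # Guard against the model inventing a bucket outside the allowed list.
          if result.get("category") not in CATEGORIES:
              result["category"] = "other"
          return result

      print(categorize_feedback("Checkout errors out every time I try to apply a coupon."))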

    We just launched our pilot today. I’d love to hear your thoughts on the implementation or how you’re currently handling feedback triage.

    Link: https://reavil.io

  • asphero 5 hours ago
    The friction point capture is interesting. In my experience, the hardest part isn't collecting feedback - it's getting users to actually leave it in the moment.

    How are you handling the timing of the widget popup? Too early = users haven't formed an opinion yet. Too late = they've already left.

    Also curious about the LLM categorization accuracy. Do you let users correct misclassifications to improve the model over time?