1 comment

  • mparas 2 hours ago
    I'm a solo dev from India who moved to Berlin a few years ago. I built Sensonym because I was bored with every language app feeling the same, and I wanted to find out what happens when you make vocabulary learning physical.

    The idea is that tying vocabulary to physical actions makes it stick better, so I mapped each word to an interaction using the phone's sensors. Some examples:

    - To learn the word for "drink", you tilt your phone toward your mouth like a glass

    - To learn the word for "blow", you blow into the microphone

    - To learn the word for "listen", you bring the phone to your ear

    - To learn the word for "eat", you plug in your charger

    - To learn the word for "remember", you take a screenshot
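    A mapping like the "drink" tilt could plausibly be detected from raw accelerometer samples. Here's a rough sketch of that idea, not Sensonym's actual code: it assumes gravity-inclusive readings in m/s² with Android-style axes (y pointing out the top edge of an upright phone), and the function names and thresholds are made up for illustration.

    ```python
    import math

    def tilt_from_upright(ax, ay, az):
        """Angle in degrees between the phone's y-axis and 'up',
        from one raw accelerometer sample that includes gravity."""
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag == 0:
            raise ValueError("zero acceleration vector")
        # Clamp for float safety before acos.
        cos_a = max(-1.0, min(1.0, ay / mag))
        return math.degrees(math.acos(cos_a))

    def looks_like_drink(samples, threshold_deg=60.0, min_hits=3):
        """True once enough consecutive samples exceed the tilt
        threshold, debouncing single noisy readings."""
        hits = 0
        for ax, ay, az in samples:
            hits = hits + 1 if tilt_from_upright(ax, ay, az) >= threshold_deg else 0
            if hits >= min_hits:
                return True
        return False
    ```

    Requiring several consecutive over-threshold samples is one simple way to avoid firing the gesture when the user just shifts the phone in their hand.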

    The app has two modes: story mode, where the sensor interactions and vocabulary are woven into a narrative, and training mode for quick single-word drills.

    Sensonym supports 10 languages and is live in Germany (iOS and Android), with more regions coming soon. If you're outside Germany, you can sign up on the website to get notified when it launches in your region, or contact me at [email protected] for a beta test invite.

    I would love to hear your honest feedback. What do you think of the general approach (sensor interactions and stories)? Do the sensor-word mappings feel intuitive or forced? Any interaction ideas I'm missing?