Show HN: Semantic Calculator (king-man+woman=?)

(calc.datova.ai)

176 points | by nxa 17 days ago

72 comments

  • godelski 17 days ago

      data + plural = number
      data - plural = research
      king - crown = (didn't work... crown gets circled in red)
      king - princess = emperor
      king - queen = kingdom
      queen - king = worker
      king + queen = queen + king = kingdom
      boy + age = (didn't work... boy gets circled in red)
      man - age = woman
      woman - age = newswoman
      woman + age = adult female body (tied with man)
      girl + age = female child
      girl + old = female child
    
    The other suggestions are pretty similar to the results I got in most cases. But I think this helps illustrate the curse of dimensionality (i.e. distances are ill-defined in high-dimensional spaces). This is still quite an unsolved problem, and a pretty critical one that doesn't get enough attention.
    • n2d4 17 days ago
      For fun, I pasted these into ChatGPT o4-mini-high and asked it for an opinion:

         data + plural    = datasets
         data - plural    = datum
         king - crown     = ruler
         king - princess  = man
         king - queen     = prince
         queen - king     = woman
         king + queen     = royalty
         boy + age        = man
         man - age        = boy
         woman - age      = girl
         woman + age      = elderly woman
         girl + age       = woman
         girl + old       = grandmother
      
      
      The results are surprisingly good; I don't think I could've done better as a human. But keep in mind that this doesn't do embedding math like OP! Still, it does show how generic LLMs can solve some tasks better than traditional NLP.

      The prompt I used:

      > Remember those "semantic calculators" with AI embeddings? Like "king - man + woman = queen"? Pretend you're a semantic calculator, and give me the results for the following:

      • franga2000 17 days ago
        This is an LLM approximating a semantic calculator, based solely on trained-in knowledge of what that is and probably a good amount of sample output, yet somehow beating the results of a "real" semantic calculator. That's crazy!

        The more I think about it the less surprised I am, but my initial thought was quite simply "no way" - surely an approximation of an NLP model made by another NLP model can't beat the original. But the LLM training process (and data volume) is just so much more powerful, I guess...

        • CamperBob2 17 days ago
          This is basically the whole idea behind the transformer. Attention is much more powerful than embedding alone.
          • godelski 17 days ago
            The transformers are initialized by embedding models...

            Your embedding model is literally the translation layer converting the text to numbers. The transformer is the main processing unit for those embeddings. You can even see some self-reflection in the model, as the transformer is composed of attention and an MLP sub-network: the attention mechanism generates the interrelational dependence of the data, and the MLP projects up into a higher dimension before coming back down, which untangles those relationships. The idea is that you just repeat this process over and over. The attention mechanism has an advantage over CNN models because it has a larger receptive field, so it can better process long-range relationships (long range being across the input data), whereas CNNs are biased toward local relationships.
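
            A minimal numpy sketch of that block structure (illustrative only: single head, no masking, no layer norm, and the weights are assumed given):

              import numpy as np

              def attention(X, Wq, Wk, Wv):
                  # interrelational dependence: every position attends to every other
                  Q, K, V = X @ Wq, X @ Wk, X @ Wv
                  scores = Q @ K.T / np.sqrt(K.shape[-1])
                  scores -= scores.max(-1, keepdims=True)            # numerical stability
                  w = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)
                  return w @ V

              def mlp(X, W_up, W_down):
                  # project up into a higher dimension, then back down
                  return np.maximum(X @ W_up, 0) @ W_down

              def block(X, p):
                  X = X + attention(X, *p["attn"])                   # residual connections
                  return X + mlp(X, *p["mlp"])                       # repeat this block over and over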

      • nbardy 17 days ago
        I hate to be pedantic, but the LLM is definitely doing embedding math. In fact, that's all it does.
        • n2d4 17 days ago
          Sure! Although I think we both agree that the way those embeddings are transformed is significantly different ;)

          (what I meant to say is that it doesn't do embedding math "LIKE" the OP — not that it doesn't do embedding math at all.)

          • coolcase 17 days ago
            Yeah we'd be impressed if an LLM calculated the product of a couple of 1000x1000 matrices.
      • godelski 17 days ago

          > The results are surprisingly good; I don't think I could've done better as a human
        
        I'm actually surprised that the performance is so poor and would expect a human to do much better. The GPT model has embedding PLUS a whole transformer model that can untangle the embedded structure.

        To clarify some of the issues:

          data is both singular and plural, being a mass noun[0,1]. Datum is something you'll find in the dictionary, but it's not common in use[2]. The dictionary lags actual definitions; words only mean what we collectively agree they mean (the dictionary definitely helps with that, but we also invent words all the time -- i.e. slang). I can see how this one could trip up a human, who might feel the need to change the output and consult a dictionary, but I don't think that's a fair comparison here, as LLMs don't have these same biases.
        
          King - crown really seems like it should be something like "man" or "person". The crown is the manifestation of the ruling power. We still use phrases like "heavy is the head that wears the crown" in reference to general leaders, not just monarchs.
        
          king - princess I honestly don't know what to expect. Man is technically gender neutral so I'll take this one.
        
          king - queen I would expect similar outputs to the previous one. Don't quite agree here.
        
          queen - king I get why it's removing royalty, but given the previous (two) results I think it's showing a weird gender bias. Remember that queen is something like (woman + crown) and king is akin to (man + crown). So the subtraction should be woman - man.
        
          The others I agree with. These were actually done because I was quite surprised at the results and was thinking about the aforementioned gender bias.
        
          > But keep in mind that this doesn't do embedding math like OP!
        
        I think you are misunderstanding the architecture of these models. The embedding sub-network is the translation of text to numeric tokens. You'll find mention of the embedding sub-networks in both the GPT-3[3] and GPT-4[4] papers, though they're given less prominence there than in other works. While much smaller than the main network, don't forget that embedding networks are still quite large. For the smaller models they constitute a significant part of the total parameter count[5].
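
        Back-of-the-envelope, using the hyperparameters reported in [3] (a vocab of 50,257 BPE tokens; d_model of 12288 for the 175B model and 768 for the 125M one):

          vocab, d_large, d_small = 50257, 12288, 768
          print(vocab * d_large)   # 617,558,016 params: well under 1% of 175B
          print(vocab * d_small)   #  38,597,376 params: roughly 30% of 125M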

        After the embedding sub-network comes your main transformer network. The purpose of this network is to perform embedding math! It's just that the goal is to do significantly more complicated math. Remember, these are learnable mappings (see Optimal Transport); we're just breaking the model down into its two main intermediate mappings. But the embeddings still end up being a bottleneck. They are your literal gateway from words to numbers.

        [0] https://en.wikipedia.org/wiki/Mass_noun

        [1] https://www.merriam-webster.com/dictionary/data

        [2] https://www.sciotoanalysis.com/news/2023/1/18/this-data-or-t...

        [3] https://arxiv.org/abs/2005.14165

        [4] https://arxiv.org/abs/2303.08774

        [5] https://www.lesswrong.com/posts/3duR8CrvcHywrnhLo/how-does-g...

        • n2d4 17 days ago
          You are being unnecessarily cynical. These are all subjective. I thought "datum" and "datasets" was quite clever, and while I would've chosen "man" for "king - crown" myself, I actually find "ruler" a better solution after seeing it. But each to their own.

          The rant about network architecture misses my point, which is that an LLM does not just do a linear transformation and a similarity search. Sure, in the most abstract sense it still just computes an output embedding from two input embeddings, but only in a very distant, pedantic way. (Actually, to be VERY pedantic, that would not even be true, because ChatGPT's tokenizer embeds tokens, not words. The input and output of the model are more than just the semantic embeddings of words; using two different but semantically equivalent words may result in different outputs with a transformer LLM, but not in a word-semantics model.)

          I just thought it was cool that ChatGPT is so good at it.
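
          (The tokens-not-words point is easy to see with OpenAI's tiktoken package:)

            import tiktoken

            enc = tiktoken.get_encoding("cl100k_base")
            ids = enc.encode("Navratilova")
            # rare words split into several sub-word tokens, none of which is
            # the word itself; common words often map to a single token
            print([enc.decode([i]) for i in ids])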

          • godelski 17 days ago
            I'm an engineer and researcher, it is my job to find problems, so that they can be resolved. I'd say this is different from being cynical as that tends to be dismissive. I understand how my comment can come off that way, though it wasn't my intention, so I'm clarifying.

            You're right that there's subjectivity, but not infinitely so. There is a bound to this, and that's required both for language to work and for us to build these models. I did agree that the data one was tricky, so I'm not really going to argue; I was just pointing out a critical detail, given that the models learn through pattern matching rather than a dictionary. That's why I made the comment about humans. As for king minus crown, I gave my explanation; would you care to share yours? I'd like to understand your point of view so I can better my interpretation of the results, because frankly I don't understand. What is the semantic relationship being changed if not the attribute of ruler?

            The architecture part was a miscommunication. I hope you understand how I misunderstood you when you said "this doesn't do embedding math like OP!". It is clear I'm not alone either.

              > Actually, to be VERY pedantic, that would not even be true, because ChatGPT's tokenizer embeds tokens, not words.
            
            To be pedantic, people generally refer to the tokenization and embedding together simply as embedding. It's the common verbiage. This is because with BPE you are performing these steps simultaneously, and the term is appropriate given its longer usage in math.

            I was just trying to help you understand a different viewpoint.

        • drabbiticus 17 days ago
          The specific cherry-picked examples from GP make sense to me.

             data + plural    = datasets 
             data - plural    = datum
          
          If +/- plural can be taken to mean "make explicitly plural or singular", then this roughly works.

             king - crown     = ruler
          
          Rearrange (because embeddings are just vector math), and you get "king = ruler + crown". Yes, a king is a ruler who has a crown.

             king - princess  = man
          
          This isn't great, I'll grant, but there are many YA novels where someone becomes king (eventually) through marriage to a princess, or there is intrigue for the princess's hand for reasons of kingly succession, so "king = man + princess" roughly works.

             king - queen     = prince
             queen - king     = woman
          
          I agree it's hard to make sense of "king - queen = prince". "A queen is a woman king" is often how queens are described to young children. In Chinese, it's actually the literal breakdown of 女王. I also agree there's a gender bias, but also literally everything about LLMs and various AI trained on large human-generated data encodes the bias of how we actually use language and thought patterns. It's one of the big concerns of those in the civil liberties space. Search "llm discrimination" or similar for more on this.

          Playing around with age/time related gives a lot of interesting results:

              adult + age = adulthood
              child + age = female child
              year + age = chronological age
              time + year = day
              child + old = today
              adult - old = adult body
              adult - age = powerhouse
              adult - year = man
          
          I think a lot of words are hard to distill into a single embedding. A word may embed a number of conceptually distinct definitions, but my (incomplete) understanding of embeddings is that they are not context-sensitive, right? So averaging those distinct definitions into one label is probably fraught with problems when trying to do meaningful vector math with them -- problems that context/attention are able to help with.

          [EDIT:formatting is hard without preview]

        • Sharlin 17 days ago
          "King-crown=ruler" is IMO absolutely apt. Arguing that "crown" can be used metaphorically is a bit disingenuous because first, it's very rarely applied to non-monarchs, and is a very physical, concrete symbol of power that separates monarchs from other rulers.

          "King-princess=man" can be thought to subtract the "royalty" part of "king"; "man" is just as good an answer as any else.

          "King-queen=prince" I'd think of as subtracting "ruler" from "king", leaving a male non-ruling member of royalty. "gender-unspecified non-ruling royal" would be even better, but there's no word for that in English.

          • FabHK 16 days ago
            “King - queen = male” strikes me as logical, if we take king = (+human, +male, +royal), and queen = (+human, -male, +royal), then the difference is (0human, 2male, 0royal).
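
            A toy version of that arithmetic (made-up ±1 feature axes, nothing like real embedding dimensions):

              import numpy as np

              # axes: [human, male, royal]
              king  = np.array([1.0,  1.0, 1.0])
              queen = np.array([1.0, -1.0, 1.0])
              male  = np.array([0.0,  1.0, 0.0])

              diff = king - queen                # [0, 2, 0]
              cos = diff @ male / (np.linalg.norm(diff) * np.linalg.norm(male))
              print(diff, cos)                   # direction matches "male" exactly: cos = 1.0
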
          • godelski 17 days ago

              > it's very rarely applied to non-monarchs
            
            I take your point but highly disagree that it's disingenuous to view this metaphorically. The crown has always been a symbol of the seat of power and that usage dates back centuries. I've seen it commonly used to refer to leadership in general. Actually more often.

              - https://en.wikipedia.org/wiki/Heavy_Lies_the_Crown
              - https://en.wikipedia.org/wiki/Heavy_Is_the_Head
            
            Notably, even the Henry IV passage that the idiom draws from uses it in the metaphorical sense, despite also talking about a ruler who would wear a literal crown. There's similar frequent usage in widely popular shows like Game of Thrones. So I hope you can see why I really do not think it's fair to call me disingenuous. The metaphorical usage is extremely common.

            I'll buy the king-prince relationship. That's fair. But it also seems to be in disagreement with the king-queen one.

      • amdivia 16 days ago
        Can you do the same, but with each line done in a separate context?
      • refulgentis 17 days ago
        ...welcome to ChatGPT, everyone! If you've been asleep since...2022?

        (some might say all an LLM does is embeddings :)

    • mathgradthrow 17 days ago
      Distance is extremely well defined in high dimensional spaces. That isn't the problem.
      • godelski 17 days ago
        Would you care to elaborate? To clarify, I mean that variance reduces as dimensionality increases
    • Affric 17 days ago
      Yeah I did similar tests and got similar results.

      Curious tool but not what I would call accurate.

    • gweinberg 17 days ago
      I got a bunch of red stuff also. I imagine the author cached embeddings for some words, but not really all that many, to save on credits. I gave it mermaid - woman and got merman, but when I tried boar + woman - man or ram + woman - man, it turned out it had never heard of rams or boars.
    • thatguysaguy 17 days ago
      Can you elaborate on what the unsolved problem you're referring to is?
      • godelski 16 days ago
        Dealing with metrics in high dimensions. As you increase dimensionality the variance decreases, leading to indistinguishability.

        You can get some help in high dimensions when you're more concerned with (clearly disjoint) clusters. But this is akin to doing a dimensionality reduction, treating independent clusters as individual points. (Say we have a set S with disjoint subsets {S_0,...,S_n}; your new set is now {a_0,...,a_n}, where each a_i is an element representing all elements in S_i. Think "set of sets".) But you do not get help with the interrelationships within a cluster (i.e. d(s_x, s_y) for s_x, s_y \in S_i, x ≠ y), and I think you can gather that when clusters are not clearly disjoint we're in the same situation as trying to differentiate points within a cluster.

        Understanding this can help you understand why these models (including LLMs) are good with broader concepts, like differentiating between obvious things, but struggle more with nuance. A good litmus test is to ask them about any subject you have deep knowledge in. Essentially, test yourself for Gell-Mann Amnesia. These things are designed for human preference, and when they fail they're likely to fail without warning (i.e. in ways that are not so obvious).
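
        A quick way to see the concentration effect (i.i.d. Gaussian points; the ratio of the farthest to the nearest neighbor shrinks toward 1 as dimensionality grows, so "nearest" stops meaning much):

          import numpy as np

          rng = np.random.default_rng(0)
          for d in (2, 10, 100, 1000, 10000):
              X = rng.standard_normal((500, d))
              dist = np.linalg.norm(X[1:] - X[0], axis=1)   # distances from one point to the rest
              print(d, round(dist.max() / dist.min(), 2))   # contrast ratio heads toward 1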

    • sdeframond 17 days ago
      Such results are inherently limited because the same word can have different meanings depending on context.

      The role of the Attention Layer in LLMs is to give each token a better embedding by accounting for context.
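
      A quick demonstration with a small BERT (assuming the transformers package; the vector for "bank" shifts with context):

        import torch
        from transformers import AutoModel, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("bert-base-uncased")
        model = AutoModel.from_pretrained("bert-base-uncased")

        def bank_vector(sentence):
            inputs = tok(sentence, return_tensors="pt")
            pos = inputs.input_ids[0].tolist().index(tok.convert_tokens_to_ids("bank"))
            with torch.no_grad():
                return model(**inputs).last_hidden_state[0, pos]   # contextual embedding

        river = bank_vector("i sat on the river bank")
        money = bank_vector("the bank approved my loan")
        print(torch.cosine_similarity(river, money, dim=0))        # well below 1.0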

    • charlieyu1 16 days ago
      I think you need to do A-B+C types? A+B or A-B wouldn’t make much sense when the magnitude changes
    • virgilp 17 days ago
      hacker+news-startup = golfer
    • pjc50 16 days ago
      Ah yes, 女 + 子 = girl but if combined in a kanji you get 好 = like.
  • montebicyclelo 17 days ago
    > king-man+woman=queen

    Is the famous example everyone uses when talking about word vectors, but is it actually just very cherry picked?

    I.e. are there a great number of other "meaningful" examples like this, or, the majority of the time, do you just end up with some kind of vaguely tangentially related word when adding and subtracting word vectors?

    (Which seems to be what this tool is helping to illustrate, having briefly played with it, and looked at the other comments here.)

    (Btw, not saying wordvecs / embeddings aren't extremely useful, just talking about this simplistic arithmetic)

    • loganmhb 17 days ago
      I once saw an explanation, which I can no longer find, that what's really happening here is partly that "man" and "woman" are very similar vectors which nearly cancel each other out, and that "king" is excluded from the result set to avoid returning identities, leaving "queen" as the closest next result. That's why you have to subtract and then add, and why just doing single operations doesn't work very well. There's some semantic information preserved that might nudge it in the right direction, but not as much as the naive algebra suggests, and you can't really add up a bunch of these high-dimensional vectors in a sensible way.

      E.g. in this calculator "man - king + princess = woman", which doesn't make much sense. And "airplane - engine", which has a potentially sensible answer of "glider", instead gives "Czechoslovakia". Go figure.
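
      (The exclusion is baked into the common tooling; e.g. gensim's most_similar drops the query words from the candidates. A sketch, assuming the ~65MB glove-wiki-gigaword-50 download:)

        import gensim.downloader

        model = gensim.downloader.load("glove-wiki-gigaword-50")
        # "king", "woman" and "man" are excluded from the candidates automatically
        print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))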

    • jbjbjbjb 17 days ago
      Well when it works out it is quite satisfying

      India - Asia + Europe = Italy

      Japan - Asia + Europe = Netherlands

      China - Asia + Europe = Soviet-Union

      Russia - Asia + Europe = European Russia

      calculation + machine = computer

      • kgeist 17 days ago
        Interesting:

          Russia - Europe = Putin
          Ukraine + Putin = Russia
          Putin - Stalin = Bush
          Stalin - purge = Lenin
        
        That means Bush = Ukraine+Putin-Europe-Lenin-purge.

        However, the site gives Bush -4%, second best option (best is -2%, "fleet ballistic missile submarine", not sure what negative numbers mean).

        • nxa 16 days ago
          My interpretation of negative numbers is that no "synonym" was found (no vector pointing in the same direction), and that the closest expression on record is something with an opposite meaning (pointing in reverse direction), so I'd say that's an antonym.
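
          (In cosine-similarity terms, sketched with made-up 2-d vectors:)

            import numpy as np

            def cos(a, b):
                return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

            a = np.array([1.0, 0.0])
            print(cos(a, np.array([0.7, 0.7])))    # ~0.71: roughly the same direction
            print(cos(a, np.array([-0.9, 0.1])))   # negative: points the other way ("antonym")
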
      • trhway 17 days ago
        democracy - vote = progressivism

        I'll have to mediate on that.

        • blipvert 17 days ago
          person + man + woman + camera + television = user
    • groby_b 17 days ago
      I think it's worth keeping in mind that word2vec was specifically trained on semantic similarity. Most embedding APIs don't really give a lick about the semantic space

      And, worse, most latent spaces are decidedly non-linear. And so arithmetic loses a lot of its meaning. (IIRC word2vec mostly avoided nonlinearity except for the loss function). Yes, the distance metric sort-of survives, but addition/multiplication are meaningless.

      (This is also the reason choosing your embedding model is a hard-to-reverse technical decision - you can't just transform existing embeddings into a different latent space. A change means "reembed all")

    • Retr0id 17 days ago
      I think it's slightly uncommon for the vectors to "line up" just right, but here are a few I tried:

      actor - man + woman = actress

      garden + person = gardener

      rat - sewer + tree = squirrel

      toe - leg + arm = digit

    • gregschlom 17 days ago
      Also, as I just learned the other day, the result was never equal, just close to "queen" in the vector space.
      • charcircuit 17 days ago
        And queen isn't even the closest.
        • mcswell 17 days ago
          What is the closest?
          • charcircuit 17 days ago
            Usually king is.
            • Narew 17 days ago
              Yes, and it only works because we prevent the output from being in the input.
            • KeplerBoy 17 days ago
              That would be hilariously disappointing.
      • chis 17 days ago
        I mean they are floating point vectors so
    • raddan 17 days ago
      > is it actually just very cherry picked?

      100%

    • bee_rider 17 days ago
      Hmm, well I got

          cherry - picker = blackwood
      
      if that helps.
  • spindump8930 17 days ago
    First off, this interface is very nice and a pleasure to use, congrats!

    Are you using word2vec for these, or embeddings from another model?

    I also wanted to add some flavor since it looks like many folks in this thread haven't seen something like this - it's been known since 2013 that we can do this (but it's great to remind folks especially with all the "modern" interest in NLP).

    It's also known (in some circles!) that a lot of these vector arithmetic things need some tricks to really shine. For example, excluding the words already present in the query[1]. Others in this thread seem surprised at some of the biases present - there's also a long history of work on that [2,3].

    [1] https://blog.esciencecenter.nl/king-man-woman-king-9a7fd2935...

    [2] https://arxiv.org/abs/1905.09866

    [3] https://arxiv.org/abs/1903.03862

    • nxa 17 days ago
      Thank you! I actually had a hard time finding prior work on this, so I appreciate the references.

      The dictionary is based on https://wordnet.princeton.edu/, no word2vec. It's just a plain lookup among precomputed embeddings (with mxbai-embed-large). And yes, I'm excluding words that are present in the query, because otherwise the nearest match tends to just be one of the inputs.

      It would be interesting to see how other models perform. I tried one (forgot the name) that was focused on coding, and it didn't perform nearly as well (in terms of human joy from the results).
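
      For anyone wanting to replicate: a minimal sketch of this kind of lookup, assuming the sentence-transformers package and the mixedbread-ai/mxbai-embed-large-v1 checkpoint (my guess at the model meant), with a toy vocabulary standing in for the WordNet lemma list:

        import numpy as np
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
        vocab = ["king", "queen", "man", "woman", "ruler", "crown"]
        E = model.encode(vocab, normalize_embeddings=True)    # precompute once

        def calc(plus, minus):
            q = sum(E[vocab.index(w)] for w in plus) - sum(E[vocab.index(w)] for w in minus)
            sims = E @ (q / np.linalg.norm(q))                # cosine similarity
            for w in plus + minus:                            # exclude query words
                sims[vocab.index(w)] = -np.inf
            return vocab[int(sims.argmax())]

        print(calc(plus=["king", "woman"], minus=["man"]))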

      • kaycebasques 17 days ago
        (Question for anyone) how could I go about replicating this with Gemini Embedding? Generate and store an embedding for every word in the dictionary?
        • nxa 17 days ago
          Yes, that's pretty much what it is. Watch out for homographs.
  • antidnan 17 days ago
    Neat! Reminds me of infinite craft

    https://neal.fun/infinite-craft/

    • thaumasiotes 17 days ago
      I went to look at infinite craft.

      It provides a panel filled with slowly moving dots. Right of the panel, there are objects labeled "water", "fire", "wind", and "earth" that you can instantiate on the panel and drag around. As you drag them, the background dots, if nearby, will grow lines connecting to them. These lines are not persistent.

      And that's it. Nothing ever happens, there are no interactions except for the lines that appear while you're holding the mouse down, and while there is notionally a help window listing the controls, the only controls are "select item", "delete item", and "duplicate item". There is also an "about" panel, which contains no information.

      • n2d4 17 days ago
        In the panel, you can drag one of the items (e.g. Water) onto another one (e.g. Earth), and it will create a new word (e.g. Plant). It uses AI, so it goes very deep.
  • lcnPylGDnU4H9OF 17 days ago
    Some of these make more sense than others (and bookshop is hilarious even if it's only the best answer by a small margin; no shade to bookshop owners).

      map - legend = Mercator projection
      noodle - wheat = egg noodle
      noodle - gluten = tagliatelle
      architecture - calculus = architectural style
      answer - question = comment
      shop - income = bookshop
      curry - curry powder = cuisine
      rice - grain = chicken and rice
      rice + chicken = poultry
      milk + cereal = grain
      blue - yellow = Fiji
      blue - Fiji = orange
      blue - Arkansas + Bahamas + Florida - Pluto = Grenada
    • C-x_C-f 17 days ago
      I don't want to dump too many but I found

         chess - checkers = wormseed mustard (63%)
      
      pretty funny and very hard to understand. All the other options are hyperspecific grasslike plants like meadow salsify.
      • ccppurcell 17 days ago
        My philosophical take on it is that natural language has many many more dimensions than we could hope to represent. Whenever you do dimension reduction you lose information.
    • ActionHank 17 days ago
      dog - fur = Aegean civilization
  • jumploops 17 days ago
    This is super neat.

    I built a game[0] along similar lines, inspired by infinite craft[1].

    The idea is that you combine (or subtract) “elements” until you find the goal element.

    I’ve had a lot of fun with it, but it often hits the same generated element. Maybe I should update it to use the second (third, etc.) choice, similar to your tool.

    [0] https://alchemy.magicloops.app/

    [1] https://neal.fun/infinite-craft/

  • lightyrs 17 days ago
    I don't get it but I'm not sure I'm supposed to.

        life + death = mortality
        life - death = lifestyle
    
        drug + time = occasion
        drug - time = narcotic
    
        art + artist + money = creativity
        art + artist - money = muse
    
        happiness + politics = contentment
        happiness + art      = gladness
        happiness + money    = joy
        happiness + love     = joy
    • bee_rider 17 days ago

          Life + death = mortality  
      
      is pretty good IMO, it is a nice blend of the concepts in an intuitive manner. I don’t really get

         drug + time = occasion
      
      But

         drug - time = narcotic
      
      Is kind of interesting; one definition of narcotic is

      > a drug (such as opium or morphine) that in moderate doses dulls the senses, relieves pain, and induces profound sleep but in excessive doses causes stupor, coma, or convulsions

      https://www.merriam-webster.com/dictionary/narcotic

      So we can see some element of losing time in that type of drug. I guess? Maybe I’m anthropomorphizing a bit.

    • grey-area 17 days ago
      Does the system you’re querying ‘get it’? From the answers it doesn’t seem to understand these words or their relations. Once in a while it’ll hit on something that seems to make sense.
  • __MatrixMan__ 17 days ago
    Here's a challenge: find something to subtract from "hammer" which does not result in a word that has "gun" as a substring. I've been unsuccessful so far.
    • mrastro 17 days ago
      The word "gun" itself seems to work. Package this as a game and you've got a pretty fun game on your hands :)
    • aniviacat 17 days ago
      Gun related stuff works: bullet, holster, barrel

      Other stuff that works: key, door, lock, smooth

      Some words that result in "flintlock": violence, anger, swing, hit, impact

    • Retr0id 17 days ago
      Well that's easy, subtract "gun" :P
    • ttctciyf 16 days ago
      hammer - keyboard = hammerhead

      Makes no sense, admittedly!

      "- dulcimer" and "- zither" are both firmly in .*gun.* territory, it seems...

    • downboots 17 days ago
      Bullet
    • soxfox42 17 days ago
      hammer - red = lock
    • tough 17 days ago
      hammer + man = adult male body (75%)
      • rdlw 17 days ago
        Close, that's addition
    • neom 17 days ago
      If I'm allowed only 1 something, I can't find anything either; if I'm allowed a few somethings, "hammer - wine - beer - red - child" will get you there. Guessing that, since a gun has a hammer and is also a tool, it's too heavily linked in the small dataset.
  • grey-area 17 days ago
    As you might expect from a system with knowledge of word relations but without understanding or a model of the world, this generates gibberish which occasionally sounds interesting.
  • nxa 17 days ago
    This might be helpful: I haven't implemented it in the UI, but in the API response you can see what the word definitions are, both for the input and the output. If the output has homographs, the likelihood is split per definition, but the UI only shows the best one.

    Also, if it gets buried in comments, proper nouns need to be capitalized (Paris-France+Germany).

    I am planning on patching up the UI based on your feedback.

  • GrantMoyer 17 days ago
    These are pretty good results. I messed around with a dumber and more naive version of this a few years ago[1], and it wasn't easy to get sensible output most of the time.

    [1]: https://github.com/GrantMoyer/word_alignment

  • rdlw 17 days ago
    I've always wondered if there's a way to find which vectors are most important in a model like this. The gender vector man-woman (or woman-man) is the one always used in examples, since English has many gendered terms, but I wonder if it's possible to generate these pairs from the data. Maybe list all differences of pairs of vectors, and see if there are any clusters. I imagine some grammatical features would show up, like the plurality vector people-person, or the past-tense vector walked-walk, but maybe there would be some that are surprisingly common but don't seem to map cleanly to an obvious concept.

    Or maybe they would all be completely inscrutable and man-woman would be like the 50th strongest result.
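
    One cheap way to probe that, as a sketch: sample random word pairs, cluster the normalized difference vectors, and peek at what lands together (assuming gensim's glove-wiki-gigaword-50 as a stand-in model):

      import numpy as np
      import gensim.downloader
      from sklearn.cluster import KMeans

      model = gensim.downloader.load("glove-wiki-gigaword-50")
      words = model.index_to_key[:20_000]                    # most frequent words
      vectors = model[words]

      rng = np.random.default_rng(0)
      i, j = rng.integers(0, len(words), size=(2, 50_000))   # random word pairs
      keep = i != j                                          # drop degenerate pairs
      i, j = i[keep], j[keep]
      diffs = vectors[i] - vectors[j]
      diffs /= np.linalg.norm(diffs, axis=1, keepdims=True)  # keep direction only

      km = KMeans(n_clusters=100, n_init=10).fit(diffs)
      members = np.flatnonzero(km.labels_ == 0)[:5]          # peek at one cluster
      print([(words[i[m]], words[j[m]]) for m in members])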

  • ale42 17 days ago
    Not what it's meant for, I guess, but it's not very strong at chemistry ;-)

      salt - chlorine + potassium = sodium
      chlorine + sodium = rubidium
      water - hydrogen = tap water
    
    It also has some other interesting outputs:

      woman + man = adult female body (already reported by someone else)
      man - hand = woman
      woman - hand = businesswoman
      businessman - male + female = industrialist
      telephone + antenna = television equipment
      olive oil - oil = hearth money
  • anonu 17 days ago
    Reminds me of the very annoying word game https://contexto.me/en/
  • skeptrune 17 days ago
    This is super fun. Offering the ranked matches makes it significantly more engaging than just showing the final result.
  • ericdiao 17 days ago
    Interesting: parent + male = female (83%)

    Can not personally find the connection here, was expecting father or something.

    • ericdiao 17 days ago
      Though dad is in the list with lower confidence (77%).

      High-dimensional vectors are always hard to explain. This is an example.

  • afandian 17 days ago
    There was a site like this a few years ago (before all the LLM stuff kicked off) that had this and other NLP functionality. Styling was grey and basic. That’s all I remember.

    I’ve been unable to find it since. Does anyone know which site I’m thinking of?

  • clbrmbr 17 days ago
    A few favorites:

    wine - beer = grape juice

    beer - wine = bowling

    astrology - astronomy + mathematics = arithmancy

  • galaxyLogic 17 days ago
    What about starting with the result and finding set of words that when summed together give that result?

    That could be seen as trying to find the true "meaning" of a word.
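
    One greedy way to attempt that, sketched as matching pursuit over word vectors (again assuming gensim's glove-wiki-gigaword-50; results are rough at best):

      import numpy as np
      import gensim.downloader

      model = gensim.downloader.load("glove-wiki-gigaword-50")
      vocab = model.index_to_key[:20_000]
      V = model[vocab]
      V = V / np.linalg.norm(V, axis=1, keepdims=True)   # unit directions

      target = "computer"
      residual = model[target] / np.linalg.norm(model[target])
      picked = []
      for _ in range(3):                                 # decompose into 3 "meaning" words
          scores = V @ residual
          scores[vocab.index(target)] = -np.inf          # don't pick the word itself
          best = int(scores.argmax())
          picked.append(vocab[best])
          residual = residual - (residual @ V[best]) * V[best]
      print(picked)                                      # words that greedily sum toward "computer"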

  • nxa 17 days ago
    artificial intelligence - bullsh*t = computer science (34%)
    • behnamoh 17 days ago
      This. I'm tired of so many "it's over, shocking, game changer, it's so over, we're so back" announcements that turn out to be just gpt-wrappers or resume-builder projects.

      The few papers that actually say something meaningful go unnoticed, but as soon as you say something generic like "language models can do this", it gets featured in "AI influencer" posts.

  • tiborsaas 17 days ago
    I've tried to get to "garage", but failed at a few attempts, ChatGPT's ideas also seemed reasonable, but failed. Any takers? :)
    • mynameajeff 16 days ago
      "car + house + door" worked for me (interestingly "car + home + door" did not)
      • tiborsaas 16 days ago
        Thanks, nice :) House sounds more general, I guess.

        I've had some fun finding this:

            car - move + shape = car wheel
  • fallinghawks 17 days ago
    goshawk-cocaine = gyrfalcon , which is funny if you know anything about goshawks and gyrfalcons

    (Goshawks are very intense, gyrs tend to be leisurely in flight.)

  • neom 17 days ago
    Cool, but not enough data to be useful yet, I guess. Most of mine either didn't have the words or were a few % off the answer: vehicle - road + ocean gave me hydrosphere, but the other options below were boat, ship, etc. Klimt almost made it from Mozart - music + painting. doctor - hospital + school = teacher: nailed it.

    Getting to cornbread elegantly has been challenging.

  • yigitkonur35 17 days ago
    shows how bad embeddings are in a practical way
  • ignat_244639 17 days ago
    Huh, that's strange. I wanted to check whether your embeddings have biases, but I cannot use the word "white" at all. So I cannot get an answer to "man - white + black = ?".

    But if I assume the biased answer and rearrange the operands, I get "man - criminal + black = white". Which clearly shows how biased your embeddings are!

    Funny thing: fixing biases, and the ways to circumvent the fixes (while keeping good UX), might be a much more challenging task :)

  • TZubiri 17 days ago
    I'm getting Navratilova instead of queen. And I can't get other words to work; I get red circles or no answer at all.
  • Jimmc414 17 days ago
    dog - cat = paleolith

    paleolith + cat = Paleolithic Age

    paleolith + dog = Paleolithic Age

    paleolith - cat = neolith

    paleolith - dog = hand ax

    cat - dog = meow

    Wonder if some of the math is off or I am not using this properly

    • Glyptodon 16 days ago
      I figure the mathematically highest value must differ from the semantically most accurate relatively frequently. (Because Car - Wheel = Touring Car doesn't make a lot of sense to me.)
  • andrelaszlo 16 days ago

        hand - arm + leg = vertebrate foot
        snowman - man =  snowflake
        snowman - snow = snowbank
  • e____g 17 days ago
    man - intelligence = woman (36%)

    woman + intelligence = man (77%)

    Oof.

  • wdutch 17 days ago
    It's interesting that I find loops. For example

    car + stupid = idiot, car + idiot = stupid

  • nikolay 17 days ago
    Really?!

      man - brain = woman
      woman - brain = businesswoman
    • nxa 17 days ago
      I probably should have prefaced this with "try at your own risk, results don't reflect the author's opinions"
      • dmonitor 17 days ago
        I'm sure it would be trivial to get it to say something incredibly racist, so that's probably a worthwhile disclaimer to put on the website
    • dalmo3 17 days ago
      I think subtraction is broken. None of what I tried made any sense. Water - oxygen = gin and tonic.
    • sapphicsnail 17 days ago
      Telling that Jewess, feminist, and spinster were near matches as well.
    • karel-3d 17 days ago
      woman+penis=newswoman (businesswoman is second)

      man+vagina=woman (ok that is boring)

    • 2muchcoffeeman 17 days ago
      Man - brain = Irish sea
      • nikolay 17 days ago
        Case matters, obviously! Try "man" with a lower-case "M"!
  • cabalamat 17 days ago
    What does it mean when it surrounds a word in red? Is this signalling an error?
    • iambateman 17 days ago
      Try Lower casing, my phone tried to capitalize and it was a problem.
    • fallinghawks 17 days ago
      Seems to be a word not in its dictionary. Seems to not have any country or language names.

      Edit: these must be capitalized to be recognized.

    • nxa 17 days ago
      Yes, word in red = word not found; mostly the case when you try plurals or non-nouns (for now).
      • rpastuszak 17 days ago
        This is neat!

        I think you need to disable auto-capitalisation because on mobile the first word becomes uppercase and triggers a validation error.

  • dtj1123 17 days ago
    "man-intelligence=woman" is a particularly interesting result.
  • ericdiao 17 days ago
    wine - alcohol = grape juice (32%)

    Accurate.

  • coolcase 17 days ago
    Oh you have all the damn words. Even the Ricky Gervais ones.
  • downboots 17 days ago
    mathematics - Santa Claus = applied mathematics

    hacker - code = professional golf

  • krishna-vakx 17 days ago
    For founders:

    love + time = commitment

    boredom + curiosity = exploration

    vision + execution = innovation

    resilience - fear = courage

    ambition + humility = leadership

    failure + reflection = learning

    knowledge + application = wisdom

    feedback + openness = improvement

    experience - ego = mastery

    idea + validation = product-market fit

  • matallo 17 days ago
    uncle + aunt = great-uncle (91%)

    great idea, but I find the results unamusing

    • HWR_14 17 days ago
      Your aunt's uncle is your great-uncle. It's more correct than your intuition.
      • matallo 17 days ago
        I asked ChatGPT (after posting my comment) and this is the response. "Uncle + Aunt = Great-Uncle is incorrect. A great-uncle is the brother of your grandparent."
  • havkom 17 days ago
    I tried:

    -red

    and:

    red-red-red

    But it did not work and did not get any response. Maybe I am stupid but should this not work?

  • hagen_dogs 17 days ago
    fluid + liquid = solid (85%) -- didn't expect that

    blue + red = yellow (87%) -- rgb, neat

    black + {red,blue,yellow,green} = white 83% -- weird

    • moefh 17 days ago
      > blue + red = yellow (87%) -- rgb, neat

      Blue + red is magenta. Yellow would be red + green.

      None of these results make much sense to me.

  • MYEUHD 17 days ago
    king - man + woman = queen

    queen - woman + man = drone

    • bee_rider 17 days ago
      The second makes sense, I think, if you are a bee.
      • neom 17 days ago
        So, are you a bee keeper then?
  • Glyptodon 16 days ago
    Car - Wheel(s) doesn't really give results I'd guess at (boat, sled, etc.), just specific four-wheeled vehicles.
  • hello_computer 17 days ago
    doesn’t do anything on my iphone
  • Finbel 17 days ago
    London-England+France=Maupassant
  • firejake308 17 days ago
    King-man+woman=Navratilova, who is apparently a Czech tennis player. Apparently, it's very case-sensitive. Cool idea!
    • fph 17 days ago
      "King" (capital) probably was interpreted as https://en.wikipedia.org/wiki/Billie_Jean_King , that's why a tennis player showed up.
      • nxa 17 days ago
        when I first tried it, king was referring to the instrument and I was getting a result king-man+woman=flute ... :-D
      • BeetleB 17 days ago
        Heh. This is fun:

        Navratilova - woman + man = Lendl

  • cosmicgadget 17 days ago

      car + dragon = panzer
  • maxcomperatore 17 days ago
    Just use an LLM API to generate results; it will be far better and more accurate than a weird home-cooked algorithm.
  • darepublic 17 days ago
    man - courage = husband
  • kylecazar 17 days ago
    Woman + president = man
  • zerof1l 17 days ago
    male + age = female

    female + age = male

  • jryb 17 days ago
    Just inverting the canonical example fails: queen - woman + man = drone
    • x3y1 16 days ago
      This kind of makes sense for bees.
  • doubtfuluser 17 days ago
    doctor - man + woman = medical practitioner

    Good to understand this bias before blindly applying these models (Yes- doctor is gender neutral - even women can be doctors!!)

    • heyitsguay 17 days ago
      Fwiw, doctor - woman + man = medical practitioner too
  • blobbers 17 days ago
    rice + fish = fish meat

    rice + fish + raw = meat

    hahaha... I JUST WANT SUSHI!

  • 7373737373 17 days ago
    it doesn't know the word human
  • G1N 17 days ago
    twelve-ten+five=

    six (84%)

    Close enough I suppose

  • bluelightning2k 17 days ago
    potato + microwave = potato tree
  • tlhunter 17 days ago
    man + woman = adult female body
  • downboots 17 days ago
    three + two = four (90%)
    • LadyCailin 17 days ago
      Haha, yes, this was my first thought too. It seems it’s quite bad at actual math!
  • erulabs 17 days ago
    dog - fur = Aegean civilization (22%)

    huh

  • atum47 17 days ago
    horse+man

    78% male horse 72% horseman

  • adzm 17 days ago
    noodle+tomato=pasta

    this is pretty fun

    • growlNark 17 days ago
      Surely the correct answer would be `pasta-in-tomato-sauce`? Pasta exists outside of tomato sauce.
  • ainiriand 17 days ago
    dog+woman = man

    That's weird.

  • mannykannot 17 days ago
    Now I'm wondering if this could be helpful in doing the NY Times Connections puzzle.
  • quantum_state 17 days ago
    The app produces nonsense ... such as quantum - superposition = quantum theory !!!
  • kataqatsi 17 days ago
    garden + sin = gardening

    hmm...

  • woodruffw 17 days ago
    colorless+green+ideas doesn't produce anything of interest, which is disappointing.
    • dmonitor 17 days ago
      well green is not a creative color, so that's to be expected
  • insane_dreamer 16 days ago
    carbon + oxygen = nitrogen

    LOL

  • ezbie 17 days ago
    Can someone explain to me what the fuck this is supposed to be!?
    • mhitza 17 days ago
      Semantic subtraction within an embedding representation of text ("meaning").
  • spinarrets 16 days ago
    cheeseburger-giraffe+space-kidney-monkey = cheesecake