60 comments

  • dweinus 15 days ago
    Andrew, you and your friends should be proud. It's really encouraging to see people, especially in your generation, thinking seriously about the problem of misinformation. There are fundamental challenges you all will face in this idea:

    - most current LLMs are trained on large amounts of web data that itself contains facts, opinions, and misinformation. These things are treated equally, so I would expect the LLM to get common facts right, but also to represent opinions or misinformation as facts when they are pervasive.

    - LLMs "hallucinate" and tend not to know when to say "I don't know" or to not try to fact-check something that is not factual in nature.

    ...in short, I would expect LLMs to be an unreliable fact checker, which has the potential to do as much harm as good.

    • helloduck1234 15 days ago
      Yea, thank you for your feedback!
      • reaperman 15 days ago
        > “…which has the potential to do as much harm as good.”

        I find this is one of the more difficult things for people to learn to fully integrate into their psyche. Many people never learn to truly care about this and everything it means. They go on forever primarily caring about what’s good for them personally.

        • paulcole 15 days ago
          Strong disagree. Put yourself first nearly 100% of the time. Nobody cares about you so don’t think others are doing anything but prioritizing themselves.

          I mean look at the world. Essentially everybody puts themselves first and it’s clear as day. Don’t trick yourself into being the sap doing things for the greater good.

          And who cares if there’s equal potential for harm and good? The harm might be less than we imagine and the good might be better than we think it could be. “This might be bad” is a terrible reason to not do something. Nearly everything might be bad!

          People are pretty resilient. They can generally deal with you being selfish.

          • spencerflem 15 days ago
            That's grim. Don't light yourself on fire to keep others warm or nothing, but in my experience most people are hoping to make the world better where they can.

            This sort of hustle culture belief is definitely present in the world, especially among finance and us techie types, but there's tons of examples of people Not acting like this. Teachers don't do it for the pay, etc. There's a reason meaningful jobs tend to pay less, and it's because so many people want to do useful, helpful things that badly.

            Anyways, point is, I want to explicitly condemn this type of thinking. Yeah, don't let fear of doing the wrong thing paralyze you, but also think through the consequences.

            • paulcole 15 days ago
              > most people are hoping to make the world better where they can.

              Are they actually making the world better or just hoping they can?

              Everybody in the developed world, if they wanted to make the world better, would live drastically differently because of their impact on the environment/climate change.

              But it’s more fun to just say that we’re hoping to make the world better so we don’t have to acknowledge how selfish we actually are.

              I’m not talking about hustle culture here. I’m talking about the selfishness we all partake in and do our best to ignore.

              • bruce511 15 days ago
                I agree that most (privileged) people behave selfishly. You don't have to look far to see that. That doesn't make it good.

                Which is why advice to be explicitly selfish is jarring. We don't need advice to do that; we excel at it naturally.

                There are however great rewards to be had from being unselfish. We can see that around us too. Being at least aware of our proclivities is the first step in discovering the benefits of countering them.

                • paulcole 15 days ago
                  If people are naturally selfish and they’re giving you advice to not be selfish, why aren’t they taking their own advice? And why should you take theirs?

                  It’s like a burglar telling you it’s a good idea to leave your doors unlocked.

                  • bruce511 14 days ago
                    Lots of people act, at least partially, in unselfish ways.

                    Selfishness is not a binary characteristic. There are degrees of selfishness - and spheres of selfishness.

                    Just because something is in our nature, it does not mean we have to behave that way all the time. Most people are neither purely selfish, nor purely unselfish.

                    To answer your question though, since selfishness exists on a scale, your assumption that people offering advice are not also practicing it is, at best, a conclusion without data.

                    • paulcole 14 days ago
                      No data? I’m relying on what you said…

                      > I agree that most (privileged) people behave selfishly

                      Another great example is tech bros telling kids to go into the trades. If it was such a great idea, why aren't they plumbers?

                      • dartos 14 days ago
                        Maybe 3 or 4 years ago I’d tell everyone to learn web development and get a cushy frontend engineer job.

                        Now I don’t. It’s too hard to enter tech right now. Juniors not coming from colleges are basically ignored in the job hunt.

                        I generally think trade skills are better for most people than a college degree (I don’t even have one.)

                      • spencerflem 14 days ago
                        fwiw I agree with you on these gripes. Tech bros are often selfish; it's something I really hate about our industry, and I'm struggling a lot trying to find a software job that does good in the world, because mine doesn't and it makes me feel awful. But I am trying at least.

                        And anyone living an unsustainable lifestyle (almost everyone) is selfish, though it's such a huge problem that putting the blame on any individual feels wrong.

                        I think where we disagree is that you're so all or nothing with this. People can be selfish in some ways and not others. You can live unsustainably while also having principles in other ways. Things could always be worse. I encourage anyone to care about the world as much as you can without being self destructive about it, and I really try to live that way myself.

                • PM_me_your_math 14 days ago
                  [dead]
              • softsound 14 days ago
                Sometimes the best one can do is try. It's worse to give up.

                And are you a good judge of whether it's better? Better can be very complicated: it could be better in part and worse in another, but it's not wrong to try... (unless it really becomes corrupted). We as people tend to overcorrect, and so life will always swing from one end of a spectrum to another.

                I think people do want to do good in the world they just get overwhelmed, or think it's impossible, or discredit the small good things they do.

                Sometimes people think that if you don't do a major good thing, all the small things don't add up. Honestly though, it's often better to do smaller good long-term things than a major non-lasting one.

                Besides, the world is getting greener, healthier and happier all the time if you look in the right places. You will always find what you ask for, so look for good and you will find it. I follow so many YouTube channels showing how much the environment is improving and how such small things really get better. I try to sponsor them when I can, and I hope to do more in the future too.

                I also personally grow local fruit trees and various plants in my backyard to help local native species, and while it's minor, I am doing something.

                • paulcole 14 days ago
                  > I follow so many YouTube channels showing how much the environment is improving and how such small things really get better. I try to sponsor them when I can and I hope to do more in the future too.

                  I take back everything I said about people being selfish.

                  Thank you for your service.

          • zero-sharp 15 days ago
            This is one of the worst things I've read on here. I hope it's a parody and that I'm just too stupid to understand it.
          • teapot7 15 days ago
            That's awful.
      • bruce511 15 days ago
        I'll add that, although this is a good project, it really doesn't matter if it works or not.

        I don't mean the fact checking part - that's legitimately a good thing to pursue. What I mean is that this has enormous value to you and your co-creators, way beyond the social good it might provide.

        For example, at some point you're going to have to deal with nuance. Things are rarely purely right or wrong. (The earth is not round, but it's a good first approximation for geological beginners.)

        So, I'd encourage you not to measure success here with "does it work", or how many users, or if LLMs are a suitable approach, or any metrics like that. The goal here shouldn't be popularity or "correctness".

        The most value you will get is the experience of building something, ideally in a team. Of facing road-blocks and challenges and overcoming them. Or, to put it another way, have fun. And things that are easy are not fun...

        Congrats on the project. May it lead you forward to discovering more about how to code, more about the world, more about yourself. Don't shy away from the hard questions. But above all keep it fun.

      • echelon 15 days ago
        You're 16.

        This is awesome, and you're doing great. This is such strong signal for an amazing career and impact.

        Keep going!

      • nzealand 14 days ago
        Your response to critical feedback is excellent btw. Nice job all around!
    • dweinus 15 days ago
      Thinking out loud... I don't think these problems can be solved. If you are going to do it anyway, I would suggest:

      - Using a RAG architecture on top of a database of factual information. Wikipedia is probably your best bet. It is not 100% factual or correct either, but maybe as good as it gets. Scaling RAG to wikipedia size is not trivial, but I think it can be done.

      - Prompting the LLM to cite its sources so people can fact-check the fact-checker

      - Prompting the LLM to say it is unsure when something does not have a clear answer. I don't expect this to be reliable, but maybe somewhat better
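
      The retrieval-plus-citation prompting suggested above can be sketched roughly like this (the retriever is stubbed out and the wording is an illustrative assumption, not a known-good recipe):

```python
def build_fact_check_prompt(claim, retrieved_passages):
    """Assemble a RAG-style prompt: the model is told to rely only on
    the numbered sources, to cite them, and to say when it is unsure."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(retrieved_passages))
    return (
        "Fact-check the claim below using ONLY the numbered sources.\n"
        "Cite sources like [1]. If the sources do not settle the claim, "
        "answer 'unsure'.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Claim: {claim}"
    )

# In practice the passages would come from a Wikipedia-backed retriever:
prompt = build_fact_check_prompt(
    "Fireworks were invented in China.",
    ["Gunpowder-based fireworks originated in medieval China."],
)
```

      The key property is that the facts live in the retrieved passages, not in the model's weights, so the citations give readers something to check.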

      • CuriouslyC 15 days ago
        There's a whole art to prompting an LLM to say it's unsure. I need to write a blog post about this; it's deep.
        • visarga 15 days ago
          Sample a bunch of LLMs with the same question, if they disagree much then they are unsure. You can even sample the same LLM with high enough temperature, text augmentations, different prompts or different demonstrations. When they are correct they say the same thing, but when they make mistakes, they make different ones. This only works for factual or reasoning tasks, but that's where it matters.
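
          A toy version of that agreement check, with the sampler stubbed in place of a real high-temperature LLM call (the names and the 0.8 threshold are illustrative assumptions):

```python
from collections import Counter
import itertools

def consistency_check(sample_fn, n=5, threshold=0.8):
    """Sample the same question n times; if the normalized answers
    mostly agree, return the majority answer, otherwise 'unsure'."""
    answers = [sample_fn().strip().lower() for _ in range(n)]
    majority, count = Counter(answers).most_common(1)[0]
    agreement = count / n
    return (majority if agreement >= threshold else "unsure"), agreement

# Stub standing in for repeated sampling of an LLM at temperature > 0:
replies = itertools.cycle(["China", "china", "China ", "china", "Italy"])
answer, agreement = consistency_check(lambda: next(replies), n=5)
```

          Normalizing the answers before voting only works when responses are short; free-form answers need a semantic comparison instead, which is the problem raised in the reply below.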
          • codetrotter 15 days ago
            But how do you know if the LLMs agree, when all of them word the response differently?

            For example

            LLM 1: Yes, it is true that fireworks were invented in China

            LLM 2: Fireworks were indeed invented in China

            • CuriouslyC 15 days ago
              Plot twist: you can ask LLMs to provide a Bayesian prior on their belief in the truth value (from multiple perspectives, again), then plug that into a variety of algorithms.
            • Bjartr 15 days ago
              Ask another model if the two statements are in agreement of course! ;)
            • omneity 15 days ago
              This is trivially achievable with function calling, assuming the model you use supports this (which most models do at this point).

              Define a function `reportFactual(isFactual: boolean)` and you will get standardized, machine-readable answers to do statistics with.
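
              As a sketch, the tool definition might look like this in the OpenAI-style tool-calling format (the exact schema shape is an assumption about the API being targeted):

```python
import json

# OpenAI-style tool definition for the reportFactual function described
# above; forcing the model to answer through it yields machine-readable
# output instead of free-form prose.
report_factual_tool = {
    "type": "function",
    "function": {
        "name": "reportFactual",
        "description": "Report whether the given statement is factual.",
        "parameters": {
            "type": "object",
            "properties": {
                "isFactual": {
                    "type": "boolean",
                    "description": "True if the statement is factual.",
                }
            },
            "required": ["isFactual"],
        },
    },
}

def parse_tool_call(arguments_json):
    """Turn the model's tool-call arguments into a plain bool for stats."""
    return bool(json.loads(arguments_json)["isFactual"])
```

              Since the arguments come back as JSON matching the schema, tallying agreement across many calls reduces to counting booleans.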

              • inimino 15 days ago
                Simpler yet, just tell the model "Reply with 'Yes' or 'No'."
              • codetrotter 15 days ago
                I’ve used function calls with OpenAI. But are there any good local LLMs that you can run with Ollama that support function calling?
      • helloduck1234 15 days ago
        Yea ok, we already have the citing thing done, and are going to start working on the RAG architecture soon.
      • netrap 15 days ago
        You don't have to solve it, you just have to try...
  • elevation 15 days ago
    Don't hinge your business success on getting an unlimited number of humans to agree with your AI on an unlimited number of facts.

    The universe of facts is infinite.

    The larger a user base, the fewer facts they'll collectively agree on.

    If you take on a large user base and an infinite knowledge domain, you'll lose trust with users who disagree with your fact checking (rightly or wrongly.)

    Instead, give yourself a smaller scope where you can actually win. Train your tool to be a world-class fact checker in a specific domain. Then market to a userbase who explicitly already agree on the facts you check against. This smaller scope sets you up for technical success and builds experience, revenue, and user trust, all of which you can leverage to iterate into another domain more quickly.

    • helloduck1234 15 days ago
      Yea ok, eliminating misinformation is what we want to eventually do. But what we can sell is something that eliminates human errors (the backend also allows for that).
      • logtempo 14 days ago
        I think you could provide good things to Wikipedia with your tool. For instance, I recently read that earth's spinning makes earth 10^7 kg heavier. While the principle is correct (energy is mass), the source is really weak, and it can easily be found that earth's mass uncertainty is about 10^20 kg, which makes the whole point useless.

        I challenged your AI on this theme: "a spring weighs more when compressed", or "earth's spinning makes it heavier". Both results tell me it's not true despite it being correct (E=mc²).

        I admit I'm cherrypicking, because it does say croissant is not French. But can I trust it blindly? That's why you must provide sources. It's valuable to have sources when we talk about truth.

        good job anyway, keep working on what you like

  • vxxzy 15 days ago
    Despite what criticisms you may receive, you put something into existence. You may not have gotten everything right but, it is admirable. Congratulations. At your age, many of your detractors probably didn’t do something so bold. Keep with it.
    • abraae 15 days ago
      Also your responses here show you in a good light. It's easy to lose your cool when facing even sensible criticism online. Great work and you have an exciting career ahead of you.
    • helloduck1234 15 days ago
      Thank you!
  • kemotep 15 days ago
    Is there more where we can read about how it works?

    I tried testing it with the sentence “Toledo is the largest city in Ohio” and its suggestions were to replace this with paragraphs of text about what constitutes a city in Ohio and history about Ohio, which did happen to include that Columbus is currently the largest city in Ohio.

    That doesn’t seem strictly helpful as a fact checker, as at first glance it isn’t even addressing the truthiness of the original sentence and just bloviates on about other details, even if I accept its suggestion.

    Is there a demo that shows how you expect people to use this?

    • helloduck1234 15 days ago
      Yea, we are working on it. It's something we are trying to fix: it giving too many or too few suggestions.
      • kemotep 15 days ago
        This is impressive for someone who is 16. My two cents would be to add a short demo: a 30-second gif or video with accompanying text that explains how to use this. Just some kind of demonstration of how your interface works and what to expect the output to look like. And secondly, to possibly help with the over-suggestion/under-suggestion issue, the grammar and writing assistance should be a separate mode from the fact checker.

        Again, I would love to know more about how this works in the sense of how does it determine facts, and as you alluded to in other comments how it avoids political opinions.

        Thanks for sharing.

      • pteraspidomorph 15 days ago
        Also keep in mind that if you get a huge box full of text, it's getting cut off by the UI and you can't reach the buttons!
  • BulutTheCat2 15 days ago
    Ok,

    I have heard a lot of good and bad comments, so I want to clear some things up.

    First of all, we are using an OpenAI-based back-end, and we are in the process of developing our own LLM for the task of text fitting. Secondly, the LLM in our approach is currently only used for text translation. This means we assume the LLM doesn't know anything, and thus before the user's input even reaches the LLM, it has to hit a couple of other models which determine whether the statement is a fact or not, then an intermediary text-analysis model to extract understandable queries from the text, which can then be used to search information about the topic from a DB (for example, the Google FC API, or a custom dataset of known-good documents). After that process, all the data is presented to an LLM which can then fit the known-good data into the context of the user's input.

    The LLM itself is never trusted with data.

    Of course, for a system like this to work, we would need access to a DB and those intermediary models, which, as you can guess, will take a while to build and develop. For now, we are pushing our beta without this system, using a dumbed-down, non-optimized (and definitely flawed) version to fix bugs and to test the security and scalability of our back-end platform.
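
    For readers following along, the flow described above can be sketched with every stage stubbed out (these names are illustrative stand-ins, not the actual Factful internals):

```python
def fact_check_pipeline(text, is_claim, extract_queries, search_db, llm_rewrite):
    """Sketch of the flow above: the LLM is never trusted with facts;
    it only rewrites the user's text around retrieved evidence."""
    if not is_claim(text):                 # 1. claim-detection model
        return text                        #    nothing factual to check
    queries = extract_queries(text)        # 2. query-extraction model
    evidence = [doc for q in queries for doc in search_db(q)]  # 3. trusted DB
    return llm_rewrite(text, evidence)     # 4. LLM fits evidence to context

# Toy stand-ins for each stage:
out = fact_check_pipeline(
    "Toledo is the largest city in Ohio.",
    is_claim=lambda t: "largest" in t,
    extract_queries=lambda t: ["largest city in Ohio"],
    search_db=lambda q: ["Columbus is the largest city in Ohio."],
    llm_rewrite=lambda text, ev: ev[0] if ev else text,
)
```

    The point of the structure is that the final rewrite can only draw on what the DB returned, so the LLM's own (possibly wrong) knowledge never enters the answer.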

    In case anyone was wondering anything about my credentials, I am one of the lead back-end developers working on the project. I am also free to answer any reasonable questions anyone might have about Factful.

    • yosito 15 days ago
      > we assume the LLM doesn't know anything, and thus before the user's input even reaches the LLM it has to hit a couple of other models which determine whether the statement is a fact or not

      Excellent! Good to see the younger generation practicing skepticism with AI and learning how to use AI appropriately. Keep up the good work!

    • helloduck1234 15 days ago
      I can confirm
  • throwaway918274 15 days ago
    So I put in:

    > Green Party leader Justin Trudeau was ejected from Canada's House of Commons after fiery exchanges with Prime Minister Jagmeet Singh. Mr Trudeau's removal came after he refused to apologise for calling Mr Singh a "wacko" and "extremist" during a question period.

    And your service came up with

    Green => Liberal; was => was not; fiery => factuality; Prime Minister => NDP Leader; "Mr Trudeau's removal came after he refused to apologise for calling Mr Singh a "wacko" and "extremist" during a question period." => did not occur as described;

    What really happened was:

    Conservative Party leader Pierre Poilievre was ejected from Canada's House of Commons after fiery exchanges with Prime Minister Justin Trudeau. Mr. Poilievre's removal came after he refused to apologise for calling Mr. Trudeau a 'wacko' and 'extremist' during question period.

    Considering this happened yesterday, this is pretty impressive. Great work.

  • sarboleda2299 15 days ago
    Hello! I built Fakts (https://fakts.co/) a few years ago pre-LLMs as a college project. It attempts to highlight true, false, or inconclusive statements in a given article by checking them against a database of reputable sources. Feel free to send an email -- I'd be happy to share some of the learnings!
    • helloduck1234 15 days ago
      Hi there, that would be great! What is your email? Mine is andrew@factful.io
      • sarboleda2299 15 days ago
        You can email me at info@fakts.co

        Btw, awesome job and huge kudos to you and your friends for publishing it here!

  • readingnews 15 days ago
    I admire your bravery of posting to something like HN for people to pick at it (there are always positives and negatives)...

    Personally, I think there are a number of hard questions to answer surrounding fact checking, it might be wise to get advice from experienced people in fact checking (I have no idea what it is called, but I think that is an entire field).

    No big deal, but it raises my curiosity: why are you located in Canada and incorporated in the U.K.? I see you already have an Ltd and a ToS (so you _did_ get legal advice already, I guess?). It seems like you have gone pretty far with this already. It seems I can get a quote as a business... do you charge by the query?

    • helloduck1234 15 days ago
      We are incorporated in the UK because it is the only place that allows 16-year-olds to do so. For the businesses, we still haven't finished the features yet, but we would charge by query for the API and by user for the subscription.
      • jll29 15 days ago
        +1 for grit
  • diabeetusman 15 days ago
    I typed "The quick brown fox jumps over the lazy dog", clicked "Check Everything", I get a brief spinner, and

        [plugin:vite:import-analysis] Failed to parse source for import analysis because the content contains invalid JS syntax. If you are using JSX, make sure to name the file with the .jsx or .tsx extension.
    
    or nothing happens

    Edit: if I open the editor, type the same text, and then click "Fact Check", I get the same error

    • helloduck1234 15 days ago
      Oh, it is because we set a global check limit of 1 per second (we didn't think we would get so many users; we got 500 in the last 30 minutes). We will fix that. But a simple reload should suffice.
  • ein0p 15 days ago
    How do you evaluate “factuality” without knowing all the facts, though? That’s the downfall of all such services - eventually (or even immediately) they begin to just push their preferred agenda because it’s easier and more profitable.

    That said, at 16 you’re just learning, and literally whatever you accomplish will be a great achievement, so go down these paths and learn your lessons

    • helloduck1234 15 days ago
      Hi there, thank you for your feedback! I think we could potentially go down the route of a web3 approach where we get the public consensus on the facts.
      • ein0p 15 days ago
        But that doesn’t preclude lying by omission, which is a strategy employed by mass media in nearly every news article in $CURRENT_YEAR
      • lm28469 14 days ago
        Sadly consensus != fact

        For example: "Which country contributed the most to the demise of Germany during ww2 ?"

        https://qph.cf2.quoracdn.net/main-qimg-f7f98c319f4d9ace2079a...

        • gus_massa 14 days ago
          On one hand, the USA sent a lot of war material to the Soviet Union to help it.

          On the other hand, I guess the change of perception is due to Hollywood.

          • ein0p 13 days ago
            Lend-Lease was only about 7% of overall Soviet war expenditure. Did it help? Yes, Soviets acknowledged that post-war. Was it decisive? That’s up for debate. Did it “defeat Hitler”? LOL.
        • ein0p 14 days ago
          That’s basically what you get if you don’t properly teach history. Idk how it is in France, but in the US I’d advise people to read Howard Zinn’s “A People’s History of the United States” to undo government brainwashing even partially, for just US history. I’m not sure what I’d recommend for history of the world; I have not yet seen a text that’d fit the bill.
  • SPBS 15 days ago
    This speaks more as a testament to your web developer skills (at 16!) than your actual value as an... AI startup. Keep it up, your career should be rosy.
    • _akhe 14 days ago
      Seems like it could be a useful fact check service provider, wherever that might apply (social media apps, news sites, etc.).

      Would come down to how nice the libraries are for developers, and how good the SaaS UI and pricing is.

    • helloduck1234 15 days ago
      Thank you!
  • adamtaylor_13 15 days ago
    Your first meta-problem to solve is to get people to care about the facts, and to accept them when they’re wrong. There is an astonishing gap between knowing the truth and acting accordingly.
    • helloduck1234 15 days ago
      Yea, that's why we also added in a grammar checker; even if they don't care about facts, they can get something better than Grammarly that checks for way more, for way less.
    • t0bia_s 14 days ago
      People who really care about facts would not trust AI as a reliable source.

      How can AI observe events in real life?

  • seoulmetro 15 days ago
    If Google and Meta aren't capable of providing the truth, what makes you think you can? Most "truths" online are just constructed lies or biased information. Wouldn't you just be chasing a single perspective based on usage?
  • omneity 15 days ago
    Awesome work! I tried it on a couple of pretty confusing examples and it worked out great.

    However you might want to build with production mode enabled, since your current build actually tries to connect to a Vite dev server on localhost:3000.

    Here's a primer to help you navigate building for production: https://vitejs.dev/guide/build

    Best of luck to you Andrew and to the team!

    • helloduck1234 15 days ago
      Yea, that was for our devs to test out stuff before pushing out; we forgot to disable it.
  • margorczynski 15 days ago
    > It evaluates context, factuality, coherence, and more to ensure the accuracy and credibility of content.

    Based on what does it do the evaluations? Is there some set collection & hierarchy of sources, and if so, will I be aware of it after it is done?

    • helloduck1234 15 days ago
      Yes, we are working on it. But currently you can check each source by pressing the little 'i' on the popup.
  • itishappy 15 days ago
    I'm having a lot of fun with this!

    Prompt:

    > Factful.io uses a hand-coded natural language parser to determine factual accuracy to within a tolerance of 12.3 standard units of factfulness.

    No revisions, it must already be perfect!

    Prompt:

    > The only remaining bias in Factful.io is that which is legally required by the USDA.

    No revisions, it must already be perfect!

    Prompt:

    > Factful.io checks everything you enter against a database of known facts as defined in ISO 102522 "A comprehensive list of all that we know to be true."

    Suggested revision:

    > It seems like there might have been a misunderstanding or an error in communication. If you have a specific question or topic you'd like to know more about, please provide more details so I can assist you effectively.

    Prompt:

    > Factful.io provides the Objective Truth as determined by an international panel of experts from countries which have names beginning with "Y" and no ports.

    Suggested revision:

    > The reduction in extreme poverty over the past two centuries is one of the most significant changes in human conditions. The decline from over 80% of the global population living in such conditions in 1800 to under 20% by 2015 represents a major shift, driven by a combination of factors including economic growth, technological advancements, improved health care and education, and international aid and development efforts. The World Bank's international poverty which which was set at $1.90 per day in 2011 prices, is a benchmark used to measure extreme poverty globally. Adjusted for inflation, this threshold would be equivalent to $2.57 per day in 2023 dollars. Living below this line means that individuals lack the resources to meet their basic needs for survival. Despite the progress, extreme poverty remains a critical issue, particularly in regions like South Asia and Sub-Saharan Africa. Nigeria, for example, was have to have the highest number of people living in extreme poverty with 2018, with 86 million individuals below the international poverty line. Efforts to combat extreme poverty continue through various means, including economic development, and in education and health care, and targeted poverty alleviation programs. The Sustainable Development Goals (SDGs), adopted by all United Nations Member States in 2015, include a goal to end poverty in all its forms everywhere by 2030. Achieving this goal will require sustained and concerted efforts from governments, international organizations, the private sector,

    I think it's fair to say there's some edge cases left to smooth out. ;)

    • helloduck1234 15 days ago
      Yea, thank you for the feedback. As it's a beta deployment, that's what we want: people finding bugs so we can fix everything for the full launch.
  • xyst 15 days ago
    The "AI based startup" scene is flooded with many of these. What makes you different?

    I tried to gauge quality using chatgpt-3.5, 4.0, and your tool. chatgpt produced better results.

    Appreciate the hustle, but the hype around AI is fizzling out tbh.

    • _akhe 14 days ago
      It's not fizzling; you're going to be able to say:

        render 40 hours of Harry Potter film based on every detail of all books
      
      or:

        create a sequel to Titanic where Leo DiCap comes back from the dead and rezzes Godzilla 
      
      and you'll be able to watch that. Imagine that in every content vertical, we haven't even scratched the surface.
      • lm28469 14 days ago
        I fail to see how that example is supposed to be a good thing for humanity. Everybody watching/listening to what _they_ want to watch/hear is already polarising the masses like never before
        • _akhe 14 days ago
          Sure, that's a different conversation though. That's like talking about a new brand of beer and then saying "I don't like this beer because there are so many alcohol related deaths and accidents, this will only contribute to more" - while true and you may have a point, there is still a separate conversation where people appreciate beer and engage in connoisseurship etc.

          To talk about what entertainment, art, music, etc. might come out of AI I think is a conversation worth having for anyone into those things, either producing or consuming, macro ethics aside.

  • netbioserror 15 days ago
    Thanks for helping the techno-dystopian power class build their toolkit. +10 social credit.
  • thinkingtoilet 15 days ago
    Impressive! Really well done, Andrew.

    One piece of feedback. I used the sample text and it gave me a sentence about a goldfish's memory, and when I checked it, it had three errors. I had to update each error individually, and over a longer sentence/thought/paragraph that might get cumbersome. It would be nice if there were a way to see a fully corrected sentence with the changes pointed out, so I can do a one-click change for a single sentence.

    • helloduck1234 15 days ago
      Yea, already done! It will be pushed tomorrow, when fewer people are on.
  • dutchbrit 15 days ago
    Cool stuff. The only feedback I have is that it could be a bit clearer on the homepage what your product does; I only found out when playing around (I didn't read your post before trying). Maybe make it clear what it does before letting people play with it, so have the test box in a 2-column layout, or perhaps later on down the page? I didn't expect there was any other content/scroll on the homepage when I first landed on it.
  • remram 15 days ago
    Does this do more than send the text to ChatGPT concatenated with a prompt like "fact check this essay, putting your notes between brackets"?
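    (For reference, the naive baseline being described here is just prompt concatenation. A minimal sketch, assuming the OpenAI v1 Python client; the model name and prompt wording are illustrative, not what the site actually does:)

```python
# Sketch of the "concatenate a prompt and send it" baseline described above.
# The OpenAI v1 client usage and model name are assumptions for illustration.
def build_fact_check_prompt(essay: str) -> str:
    """Prepend a fixed fact-checking instruction to the user's text."""
    return (
        "Fact check this essay, putting your notes between brackets:\n\n"
        + essay
    )

def fact_check(essay: str, client=None) -> str:
    """Send the concatenated prompt to an LLM; with no client, return the prompt."""
    prompt = build_fact_check_prompt(essay)
    if client is None:
        return prompt  # lets callers inspect exactly what would be sent
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

    Anything a backend does beyond this (retrieval against a database, source links, span-level suggestions) is machinery layered on top of that single call.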

    Nothing happens when I click "fact check" so it's very hard to evaluate.

    Also from your ToS:

    > You may not modify, reproduce, distribute, or create derivative works based upon the Service, in whole or in part, without our prior written consent.

    Can I even send my fact-checked document to anyone?

    • helloduck1234 15 days ago
      Yea, in fact the backend is really complicated; we have to do a lot of extra processing in order for the information to be displayed. As for that line, don't worry about it, I will edit it. I meant something else by it, like you can't just steal our name and stuff like that.
  • glorp_ 15 days ago
    I found a bug with your suggestion replacements:

    > Herbert Hoover was the 31st President of the United States. 144 / 12 = 3

    > Suggestion: (3) Correct your text to 12

    > Herbert Hoover was the 121st President of the United States. 144 / 12 = 3

    Looks like it's searching for the first text match
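    (That symptom is exactly what you'd get from applying a suggestion with a first-match string replace instead of anchoring it to the flagged span's character offset. A sketch of both behaviors; the function names are hypothetical:)

```python
def apply_naive(text: str, old: str, new: str) -> str:
    """Replace the FIRST occurrence of `old`, wherever it happens to be."""
    return text.replace(old, new, 1)

def apply_anchored(text: str, old: str, new: str, start: int) -> str:
    """Replace only the occurrence at a known character offset."""
    assert text[start:start + len(old)] == old, "offset does not match flagged span"
    return text[:start] + new + text[start + len(old):]

text = "Herbert Hoover was the 31st President of the United States. 144 / 12 = 3"

# Naive first-match replace corrupts the unrelated "31st" into "121st":
assert apply_naive(text, "3", "12").startswith("Herbert Hoover was the 121st")

# Anchoring on the offset of the flagged trailing "3" edits the right span:
assert apply_anchored(text, "3", "12", start=text.rfind("3")).endswith("144 / 12 = 12")
```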

  • unraveller 14 days ago
    You really have to believe in facts in total sensory isolation for a fact-checker service to be of use to you. Conveying messages has no scientific microscope or atom analogy.

    Facts are sacks, it is what puffs them up and surrounds them that gives them their veracity which causes so much ire. You can't capture or deflate veracity so easily, but many think they can get around this by certifying lone facts to a minimum level of approved "veracity" thus amplifying the veracity of any claims made with them. Wink-wink. It's more crucial to be rid of centrally-informed favoritism than hot takes.

  • _akhe 15 days ago
    This is awesome! Very very good work. Simple, friendly product design and I love that it corrects both facts and grammar.

    Not only is this an impressive use of LLMs, it's highly relevant to our social media of today, and I can imagine use cases for user-facing apps like giving context to a user comment (which may or may not be factual).

    Are there API client libraries where I can see developer usage? That's likely how I would use a product like this.

    Very impressive work for high school - I was making Lunar Lander in VB6 with MS Paint graphics at that age :) very cool to see work of this quality from a 16-year-old.

  • SrslyJosh 15 days ago
    > I'm a high school student with a passion for tackling misinformation online. Inspired by the need for more reliable content verification tools, I decided to create Factful. It's an AI-powered web app

    I stopped reading here because LLMs do not deal in facts. LLMs are statistical models of the relationships between words. An LLM can regurgitate facts that appear in its training data, but they are incapable of distinguishing between fact and fiction.

    You cannot trust anything output by an LLM to be factual; it always needs to be verified. Therefore, LLMs are unsuited to fact-checking.

    I'm not saying this to be a dick. I'm trying to warn you against investing a lot of time and energy into something that just doesn't work the way people want it to.

    • protocolture 15 days ago
      (Go read the comment the developers made when they addressed this very issue upthread; the kids are smarter than you immediately decided.)
    • mistermann 15 days ago
      Did you fact check all of the claims in your comment before pushing submit?
    • smartscience 15 days ago
      Searle, is that you?
      • shrimp_emoji 15 days ago
        What if the people translating the characters were neurons? ;D

        Also, it's rich to imply humans don't do literally everything LLMs are accused of to argue they're fundamentally different from humans.

  • bschwindHN 15 days ago
    Getting some Metal Gear Solid 2 vibes from this

    https://www.youtube.com/watch?v=jIYBod0ge3Y

  • an_aparallel 15 days ago
    I've thought about a browser extension which converts slabs of text into propositional logic,

    eg: https://math.stackexchange.com/questions/421312/converting-t...

    I wonder if, when the text doesn't evaluate to true, you can just assume incredibility off the bat. If it passes, you can continue...

    do you use this concept in this project? Curious :)

  • ethanwillis 15 days ago
    The example given on the page about Marco Polo raises a few things...

    1. When a voyage starts and when a voyage ends are two different things.

    2. As with most things in history, nailing down when something actually happened is a range of values. You say he made it to China in 1271... but that's not fully accurate, is it? It's a range of time in which he actually made it to China.

  • megaloblasto 15 days ago
    Keep up the good work! This is very impressive. Keep on crafting your skills. Since you asked for feedback and you seem to have a lot of potential, here's my 2 cents. As you go through life searching for your passions, don't forget to be kind to people along the way, and don't let the old people boss you around. Good luck!
  • cbsmith 15 days ago
    This is an important area of investigation, and you should be proud to take on the challenge. I would encourage you to look at the competitive landscape to get a sense of what the current state of the art is (there's definitely room for improvement), and how you might want to approach the problem differently from them.
  • pteraspidomorph 15 days ago
    Here's the funniest misbehavior I got out of it: https://www.myshelter.net/istanbul.png

    (I added a green rectangle around the desired information.)

    Nevertheless, good job so far!

  • scrollaway 15 days ago
    From your "for businesses" page:

    > Lost Productivity and Bad Data Costs US Businesses 4.9 Trillion Dollars Per Year

    That was funny :)

    What is the template you're using for the site's landing pages? I feel like I've seen it before.

  • racional 15 days ago
    Currently seems to be basically crashing -- you hit "Check Everything" on a piece of Sample Text, the little wheel spins around for a bit, then it stops and leaves the original text unchanged.
  • LivenessModel 15 days ago
    How incredibly cool to see young people that are interested and capable of building things! I have a couple of rhetorical questions.

    How do you expect a language model to see through propaganda and other large-scale misinformation by power/money with a megaphone?

    How do you expect a computer program that can't reliably determine what letter a word starts with to determine objective truth?

    I appreciate that you have a passion for the subject, but this tool is fundamentally unable to do what you wish it to do. If your goal is to make money -- keep going forward. Big promises built on lies have made many tech billionaires. If your goal is to combat misinformation you'd be better served by doing it in a different way than relying on a machine.

    If you're building this at sixteen you have no limits. Don't take this as discouragement towards building things -- take it as a warning against cybernetic totalism. Make the world a better place not through technology that tells humans how to be or how things are; make the world a better place by building technology that adapts itself to human needs. Maybe even build technology that needs humans more than the humans need the technology.

    • helloduck1234 15 days ago
      Thank you! We are working on our own LLM that is based on a multitude of data; we also will double check or even triple check all the information through our DB and the internet. We are working to make it as reliable as possible.
      • ricopags 15 days ago
        Your enthusiasm is great! People don't want to quash your enthusiasm, and I'm in the same boat.

        But while enthusiasm is great, delusion is not. Since you're striving to be a founder and not a hobbyist, you have to be realistic about what you're trying to build.

        What you're describing is fundamentally not possible to provide assurances on without some kind of legitimate AGI, which you lack the resources to build yourself.

        Many better resourced companies are trying to provide grounded, factually accurate information, so it just seems like an area of effort far too broad to ever succeed in.

        I would suggest a pivot into demonstrating legitimacy in a very narrow niche before attempting to be a generalist know-it-all. Providing fine-tuning as a service to the point of assured factual grounding is itself a hard enough open challenge in AI.

        • purple-leafy 14 days ago
          This is the only wise response in the entire thread. OP, please listen to this very valid criticism. Misinformation in general is not a solvable problem, nor do I believe you could ever approach a good solution.

          You are tackling an extremely broad, nuanced, unsolvable problem.

          You and your friends are obviously incredibly bright; pivot to something more narrowly focused. Maybe you can fact check some sub-genre of information that is solvable?

          Think sports scores, building heights and structural engineering. Hard, concrete fact.

          As soon as you get into anything with any degree of subjectivity, misinformation is impossible to solve.

          I honestly thought Hacker News of all places would have given you better advice in line with the above commenter, but what's actually happening is people are filling you with false hope because you are young.

          I was in a similar position as you when I was younger, and as I’ve gotten older and had some successes I’ve learnt to listen for valid criticisms.

          Block out the noise, both positive and negative. Listen to the wise ones

  • IncreasePosts 15 days ago
    Why is your company incorporated in the UK but based out of Canada?
  • mikhael28 15 days ago
    Factful is a good name, in general, for a startup. It's a bad one for a company that wants to use AI to 'fact-check' anything. Best of luck, you are the future.
  • garrisonj 15 days ago
    This is far beyond what is expected from a 16 year old. Great Job!

    I see some comments giving you suggestions that I don't agree with. I suggest you keep going.

  • fathasya 15 days ago
    I've checked the website; is this tool free forever, or does it have limitations? I don't see any pricing on the landing page.
    • helloduck1234 15 days ago
      We are going to try to keep it free as long as possible. In the event that we cannot sustain it anymore, we will make it cheaper than any correction tool out there.
  • ofey404 14 days ago
    A tiny feedback:

    I tested it with 'The founder of factful.io is John F. Kennedy', but it responded with nothing.

  • vouaobrasil 15 days ago
    My only advice is to be cautious about artificial intelligence. It may seem like a fascinating creation, but we are at the start of an arms race where people use AI to create misinformation, and tools like yours counter that. It may be that tools like yours inspire people to create even more malicious AI tools, similar to how weapons became more powerful because someone always wanted a greater weapon.

    Moreover, if we are living in a world where we need advanced AI to even check basic facts, is this a direction we really want to continue in? I admire where you are coming from, but I don't think it will end well for society.

  • protocolture 15 days ago
    Good stuff. Even if it's not quite there, as some commenters attest, it's a massive achievement.
  • oneepic 15 days ago
    I'm just going to write a comment without any criticisms or rudeness: Great work!
  • financetechbro 15 days ago
    What is your go to market? (Who are your customers and how are you going to monetize them)
  • indigodaddy 15 days ago
    How are you handling the gpu power in the backend? Renting some dedicated GPU servers or?
  • wuj 15 days ago
    Great idea, thanks for putting it out there. What stack did you use?
  • kgiddens1 15 days ago
    Congrats! I think if you really want to shine, you should have a real-time API that fact checks any televised election debate or interview :) Good luck and keep on building
    • helloduck1234 15 days ago
      Yea, we are building one currently. It should be ready in the summer (as we have APs and finals coming up)
  • rocksalad 14 days ago
    For your age you've made a nice project. Keep it up!
  • ThinkBeat 15 days ago
    I get

    "Error occurred processing text500"

    no matter what I put into it.

  • theGnuMe 15 days ago
    Cool idea... do you offer an API?
    • helloduck1234 15 days ago
      Hi there, we are working on one, but as we are high school students, it won't be done until the summer, with APs and finals coming up.
      • theGnuMe 15 days ago
        Your exams are way more important and good luck!
  • jstzon 14 days ago
    Awesome work!
  • tagyro 15 days ago
  • aio2 14 days ago
    w andrew
  • xriddle 15 days ago
    Love the initiative Andrew ... PM me. You have some glaring security issues on your app you might want to know about.

    edit: i'll email you at andrew@factful.io

    • helloduck1234 15 days ago
      Yea thank you! We will fix it.
      • xriddle 15 days ago
        np .. you should change all those fast ... you never know if someone got a hold of them already.
    • SillyUsername 15 days ago
      You might want to remove these comments now to not give ppl ideas.
  • PM_me_your_math 14 days ago
    [dead]
  • zero-sharp 15 days ago
    [flagged]
    • helloduck1234 15 days ago
      Damn, what the MVP does right now is check for already established facts. So basically, what we define as a fact is something that has proof. For example, we stray away from political views, as those are opinions, and stay with “facts”. As for disputed facts, we would also stay away from them unless one side has a significantly larger backing than the other. The source currently is LLMs that are trained on huge amounts of data. We have a learn more feature, so whenever you get a suggestion, it gives you the link that the information was pulled from, so you can check the source yourself. So in a nutshell, we focus more on known solid facts like historic and scientific information. And we are not capping about our age, here is my LinkedIn: https://www.linkedin.com/in/andrew-jiang1/
      • zero-sharp 15 days ago
        The goal is admirable. But "fact checking" is complicated and intelligent people can disagree. And a lot of people understand that having a centralized source of "truth" isn't desirable.

        The mathematical disciplines have proofs. That is, "facts" that are attained by deduction. Science can be rigorous, but it isn't driven through proofs. Science is empirical. A lot of the time, scientific research is not accessible and requires expert consensus. In any case, my guess is that most people aren't going to be fact checking math and physics, but socially relevant claims. And those are usually politicized.

        The point is: I don't know how you're going to decide on "facts" or "truth" using AI without there being bias or domain expertise & understanding, especially considering that we know LLMs hallucinate. Again, keep in mind that educated people (experts) can disagree on social claims. Sorry if this is discouraging, but I'm just trying to be realistic.

        • sqeaky 15 days ago
          A core part of the problem is a lack of trust in institutions. People feel the experts aren't experts, feel the experts lie, or feel the experts have different and sometimes malicious goals.

          Even if that weren't the case, new institutions should be less trusted than those with a solid history.

          So this new institution is trying to convince people of things. Why would my flat-earther coworker believe factful.io when they don't believe NASA?

          I don't think this or any centralized tool or team can solve misinformation. Despite that, I think this is a worthwhile goal, and I hope that you improve the situation as much as possible.

          • zero-sharp 15 days ago
            Sometimes I hear about nutrition/food studies done in academia which were funded by big food corporations. It's probably also common in the medical field to have big pharma doing the sponsoring. I don't disagree with you. Lack of trust is a problem in some ways. But I've also become used to the idea that determining truth includes scrutinizing the source. It really is a lot of work.
            • sqeaky 15 days ago
              It absolutely can be a lot of work, but sometimes the statement or stance is self-disproving.

              I once worked with a conspiracy theory believer who thought that the Earth was simultaneously a flat disk and a hollow sphere. This person wasn't obviously a fool in normal conversation. They had no trouble writing C++, they had a master's degree in math, they had recently purchased a second house and were doing the repairs themselves to flip it, and he was comfortable writing SQL for the application we were working on.

              But he absolutely refused to believe that "they" weren't out to get him. In retrospect it was clear that this was likely stemming from anti-Semitic conspiracy backgrounds, but more than once I asked him to verify if he meant actual lizard people or if that was code for something, and with great conviction he told me he actually believed some of our leaders were lizards.

              This person fundamentally didn't trust our institutions. He thought that the airlines and the government were being headed by literal lizard people in human suits. He had access to all the evidence to the contrary, but he had been betrayed so many times by the government that he saw no reason to trust anything from them, and he refused any photographs I produced from NASA.

              No matter how hard I worked to produce and vet information, citing sources, or even produce experiments that he could reproduce, he just wouldn't trust anything that appeared to line up with what he perceived to be the goals of the government.

              Worse, something so many people attached to reality deny is that people like him are common. I fully believe that one in three Americans are as delusional as this guy at least some of the time.

              • purple-leafy 14 days ago
                Sounds like mental health issues tbh
                • sqeaky 11 days ago
                  You are probably correct. If you are, it is something millions of people suffer from: an inability to work with evidence or vet sources. Like how symbols are hard to parse for some (dyslexia), or math is incomprehensible to some (dyscalculia). How about dysevidentia, or dyslogia?
      • lazyasciiart 15 days ago
        Historic facts like why the Civil War began?
        • jfyi 15 days ago
          I read this as, "it's bewildering what ends up politicized", and I agree. Though, not so much in this specific case. This one seems very clear.
    • jasonjmcghee 15 days ago
      OP's project aside, even a flawed solution is progress towards a good solution.
      • zero-sharp 15 days ago
        Yes, I could have been more positive. Though I worry about overpromising with technology, especially on this topic.
        • purple-leafy 14 days ago
          Your take is one of the only grounded takes I've seen. I'm sorry, but are the majority of posters here delusional? I thought Hacker News was meant to be intelligent people who would give actual grounded advice, not just blindly tell people their very flawed ideas/goals are great.
  • mongonews 15 days ago
    [flagged]
  • thedrbrian 15 days ago
    Hmmmmmm which three letter agency is this ?