Bard can now connect to your Google Apps and services

(blog.google)

187 points | by meetpateltech 399 days ago

24 comments

  • typpo 399 days ago
    "Extensions" and integration into the rest of the Google ecosystem could be how Bard wins at the end of the day. There are many tasks where I'd prefer an integration with my email/docs over a slightly smarter LLM. Unlike ChatGPT plugins, Google has the luxury of finetuning its model for each of their integrations.

    The new feature for enriching outputs with citations from Google Search is also pretty cool.

    • jsight 399 days ago
      Yes, exactly. Integration is where the real power of these agents can live.

      I really want an agent that can help me with pretty simple tasks:

      - Hey agent, remember this link and that it is about hyper fast, solar powered, vine ripened retroencabulators.
      - Hey agent, remember that Bob Retal and I talked about stories JIRA-42 and JIRA-72 and we agreed to take actions XYZ.
      - Hey agent, schedule a Zoom meeting with Joe in the afternoon next Tuesday.
      - Hey agent, what did I discuss with Bob last week?

      Something with retrieval and functional capability could easily end up being easier to use than the actual UIs that are capable of doing these kinds of things now.
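
      Concretely, the shape I have in mind is something like this: a minimal sketch with an OpenAI-style tool/function schema and a thin local dispatcher. The tool names and the in-memory note store are hypothetical placeholders, not anything Bard or Assistant actually exposes.

        import json
        from datetime import datetime

        NOTES = []  # stand-in for a real retrieval store (search index, vector DB, ...)

        # What would be advertised to the model as callable tools.
        TOOLS = [
            {"name": "remember_note",
             "description": "Store a free-form note for later retrieval",
             "parameters": {"type": "object",
                            "properties": {"text": {"type": "string"}},
                            "required": ["text"]}},
            {"name": "schedule_meeting",
             "description": "Schedule a meeting with an attendee at a given time",
             "parameters": {"type": "object",
                            "properties": {"attendee": {"type": "string"},
                                           "when": {"type": "string"}},
                            "required": ["attendee", "when"]}},
        ]

        def remember_note(text: str) -> str:
            NOTES.append({"text": text, "at": datetime.now().isoformat()})
            return "noted"

        def schedule_meeting(attendee: str, when: str) -> str:
            # A real agent would call the Calendar/Zoom API here.
            return f"meeting with {attendee} requested for {when}"

        def dispatch(call: dict) -> str:
            """Route a model-produced {"name", "arguments"} call to local code."""
            fn = {"remember_note": remember_note,
                  "schedule_meeting": schedule_meeting}[call["name"]]
            return fn(**json.loads(call["arguments"]))

        # e.g. what the model might emit for "remember this link about retroencabulators":
        print(dispatch({"name": "remember_note",
                        "arguments": json.dumps({"text": "link about retroencabulators"})}))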

      • dustymcp 399 days ago
        oh so a working google assistant? :)
        • jsight 399 days ago
          Yes, basically. I'm shocked at how bad that is compared to what it could be. If anything, it has gotten worse since introduction.
        • htrp 398 days ago
          I think they tuned down the model because TPUs are needed for actual AI work
      • singularity2001 397 days ago
        Having to explicitly say "remember that" is so anachronistic. Of course computers (should) remember everything.
    • pphysch 399 days ago
      No doubt about it. Google isn't competing directly with ChatGPT, but is betting that having a small fine-tuned model "close to the data" will dramatically cost-outperform a huge general-purpose LLM. Less resource-intensive inference, less prompt engineering (less noise).
    • tlogan 399 days ago
      Yes - but the big question is this: how to monetize this? This might completely kill search and ads.
      • jmole 399 days ago
        Workspace is already monetized
      • plaidfuji 399 days ago
        It’s a competitive differentiator of Workspace vs Office that will help retain existing users and maybe some day cause large enterprises to think more about switching.
      • topicseed 399 days ago
        By showing ads... They'll figure out units that look native and history will repeat itself.
  • eminence32 399 days ago
    For someone already invested in the Google suite of products (gmail, docs, etc), this sounds pretty useful.

    Also, this part seems especially interesting:

    > Starting today with responses in English, you can use Bard’s “Google it” button to more easily double-check its answers. When you click on the “G” icon, Bard will read the response and evaluate whether there is content across the web to substantiate it. When a statement can be evaluated, you can click the highlighted phrases and learn more about supporting or contradicting information found by Search.

    The biggest problem with all LLMs at the moment is the frequency at which they are wrong (at least when they are used like an internet search to lookup factual info). Any LLM that can improve this (or as in Bard's case, make it easier to detect wrong info) is likely to gain traction.
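
    As a rough illustration only, the quoted double-check could be approximated by a loop like the one below; web_search() and supports() are hypothetical stand-ins for a search API and an entailment check, and this is not how Google actually implements the feature.

      import re

      def web_search(query: str) -> list[str]:
          """Placeholder: return result snippets from whatever search API you have."""
          return []

      def supports(snippet: str, claim: str) -> bool:
          """Placeholder: a real system would use an entailment/NLI model or an LLM."""
          return claim.lower() in snippet.lower()

      def double_check(response: str) -> dict[str, str]:
          verdicts = {}
          for claim in re.split(r"(?<=[.!?])\s+", response.strip()):
              snippets = web_search(claim)
              if not snippets:
                  verdicts[claim] = "no evidence found"
              elif any(supports(s, claim) for s in snippets):
                  verdicts[claim] = "supported"
              else:
                  verdicts[claim] = "possibly contradicted"
          return verdicts

      print(double_check("Mount Whitney is the tallest mountain in the contiguous US."))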

    • candiddevmike 399 days ago
      So we can validate Bard using blogspam SEO articles, eventually having those written by Bard or ChatGPT? It's an LLM ouroboros
      • vitiral 399 days ago
        * the snake eating its own tail

        This feedback loop, when you extend LLMs to the horizon, is my primary point against this approach. When 90% of the new training data is (a worse version of) content it previously generated, you get a feedback loop that drives quality toward zero.

    • freedomben 399 days ago
      Bard is especially bad right now, at least compared to gpt4. Once they move to Gemini it will be interesting to compare, but until then things like "when does the new Staind album come out" have to be answered the old way with Google search and reading.
      • eminence32 399 days ago
        I'm sure others will feel differently (and I'm very eager to hear from others with different views), but for me there's not that much difference in usefulness between a model that's wrong 5% of the time and one that's wrong 25% of the time. Both models require manual validation 100% of the time.
        • BoxFour 399 days ago
          I certainly feel differently. A 5% error rate, to me, is akin to branch prediction, where I occasionally need to rollback but it's infrequent enough that it doesn't pose a significant issue.

          However, when it reaches 25%, I find myself meticulously verifying everything beforehand.

          • eminence32 399 days ago
            Do you feel confident in your ability to detect when the LLM is wrong? For me personally, I don't have the confidence to do this, which is why I feel like I need to verify everything, even in the 5% case.
            • freedomben 399 days ago
              For me it comes down to "how important is it to be right"? For many of the queries, it's not that important and if I lose the 5% gamble, it's annoying but ultimately inconsequential. If it's an important thing to be right about, then I will verify it from both.
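
              One way to put numbers on the 5% vs 25% question (every cost below is a made-up assumption, in minutes per question, not a measurement):

                def expected_cost(error_rate, verify_always,
                                  answer_cost=0.5, verify_cost=3.0, failure_cost=10.0):
                    """Expected minutes per question under a given policy (toy numbers)."""
                    if verify_always:                     # always double-check the answer
                        return answer_cost + verify_cost
                    # trust the answer, pay the recovery price when it turns out wrong
                    return answer_cost + error_rate * failure_cost

                for rate in (0.05, 0.25):
                    print(rate,
                          "always verify:", expected_cost(rate, True),
                          "trust & recover:", round(expected_cost(rate, False), 2))
                # With these made-up costs, trusting a 5%-wrong model averages 1.0 min,
                # a 25%-wrong model 3.0 min, and always verifying 3.5 min either way.
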
              • delecti 399 days ago
                I don't know how else to ask this: how are you so okay with disregarding accuracy?

                What questions are you asking where you don't care if the answer is wrong? I guess I just fundamentally don't understand what the point is. Why not just bookmark the "random article" link on Wikipedia if it doesn't matter anyway?

                • BoxFour 398 days ago
                  I encounter situations where I used to rely on Stack Overflow, which carries a similar (if not higher) likelihood of being incorrect, often due to outdated information.

                  For instance, I was recently inquiring about a specific task with CMake and consulted ChatGPT. Initially, the response was inaccurate, but it was obviously so when it didn’t compile. Upon reprompting, I received the correct answer.

                • sdenton4 399 days ago
                  A significant subset of my questions are basically, 'here's the weird statistics thing I would like to do; what are the academic jargon keywords I should look into?'

                  All of the LLMs are pretty good at this, and when one hallucinates a response, that's actually a decent indicator that there isn't a lot of literature on the subject.

                • freedomben 399 days ago
                  I ask a fair number of code questions where bugs and stuff will be caught either immediately or shortly after first use. If 5% of those are wrong, totally fine with me.

                  I also sometimes ask questions like "what is the tallest mountain in the US" or "what is the hottest desert on Earth" or similar. If I really need to know that the answer is correct, at a minimum it gives me a name to search for to verify height in feet compared to others, etc.

                • og_kalu 399 days ago
                  You have to be okay with being wrong regardless. The little snippets after a Google search are sometimes wrong. The blogs or links can be misinformed (For example, most seem to be wrong about Staind's new album release date).

                  The idea of perfection is silly because it doesn't exist, LLM or not. You're not going to get it, so it's a matter of how often it's right.

            • jsight 399 days ago
              If it is writing code, I still have to verify it every time. Then again, that is also true when I write code.

              Hopefully my rate of error is way below 25%, so a 25% error rate might not help me that much.

              I'd like to think my rate of error is less than 5% too, but that might be close enough that outsourcing the work is still worth it.

              Maybe still not worth it if I were that hypothetical programmer that writes lots of code but never writes any bugs.

      • xnx 399 days ago
        This is what Bard told me: "Staind's new album, Confessions of the Fallen, was released on September 15, 2023." If that's wrong, it's because I also see web pages that have it wrong.
        • freedomben 399 days ago
          Yep it's wrong, it's coming out on September 22. I don't doubt that some web pages have it wrong though, and it's hard to fault Bard for that.
          • vel0city 399 days ago
            Yeah, hard to blame Bard for getting that wrong when there are tons of webpages that seem semi-legit that have September 15th. Searching confessions of the fallen "September 15" 2023 yields hundreds of thousands of results which, at least in a cached version, have that date.

            https://allmusicmagazine.com/staind-release-new-single-in-th...

            Comparatively, ChatGPT says "I'm sorry, but I do not have access to real-time information, and my knowledge only goes up until September 2021. To find out the release date of a new Staind album, I recommend checking the official Staind website, social media profiles, or reputable music news sources for the most up-to-date information."

            So which is more useful, one that doesn't even know there is a new album coming out, or one that knows what its release date was as of just a couple of months ago?

            • trop 399 days ago
              > So which is more useful, one that doesn't even know there is a new album coming out, or one that knows what was its release date as of just a couple of months ago?

              To semi-misquote Lewis Carroll: Which is better, a stopped clock or a clock which loses a minute a day? Carroll posits the former, as it is precisely correct twice a day. The trick, of course, is knowing for sure when those two times per day will be.

      • jvolkman 399 days ago
        I just asked it your question and it seemed to give the correct answer? Also, there's now a built-in button to check its answer against an actual Google search and the UX is pretty slick.
        • freedomben 399 days ago
          Did it say September 15 or 18? (both answers I've gotten from it over the past month or two). If so, that's wrong. It's September 22. As sibling pointed out though, there are apparently web pages that have it wrong so it's probably not Bard's fault. I also wouldn't be shocked if the date changed and was originally 9/15
    • hdidvdksj 399 days ago
      [dead]
  • falcor84 399 days ago
    Very tangential, but I found this funny:

    > For example, if you’re planning a trip to the Grand Canyon (a project that takes up many tabs)...

    Is number of tabs considered a reasonable estimate of a project's size/complexity/scope/duration these days? If so, I'm wondering whether we could start using it instead of story points?

    • freedomben 399 days ago
      Number of tabs seems like a pretty good measurement in my cases! Much better than completely arbitrary story points anyway...
  • modeless 399 days ago
    Now this is actually useful. There's a lot of good information in my Gmail but searching it is such a pain that I hardly ever do.

    I just asked Bard for the date of an upcoming event and it did the search for me and found the right answer and summarized it with extra detail and references. This is the only reason so far that I'd go to Bard over ChatGPT.

    It did treat the @Gmail part as part of the query words though, which is weird. I think it won't be ready for mass consumption until it can decide for itself when to search Gmail or Drive with no weird keywords necessary.

    • XTHK 399 days ago
      Remember that mentions on Twitter and tags on Facebook are invoked with @. I think the masses are quite ready indeed.
  • mark_l_watson 399 days ago
    I just used it for 15 minutes and I like the direction Google is going with it. Once you turn Bard extensions on, you use the “@” character to get a pop-up list of services like GMail, Google Drive, and many others; choose one, then ask your question.

    First steps, and I look forward to seeing future improvements. I wonder how they will monetize this? I was just using it with my free GMail account.

    Both Microsoft, with Office 365, and Google have the customers and web properties that can make good use of new types of LLM applications.

    • topicseed 399 days ago
      Extensions don't seem to work with business Workspace accounts as of yet.
    • XTHK 399 days ago
      >How will they monetize this

      Public LLMs like Bard generate massive amounts of marketing data.

  • behnamoh 399 days ago
    I still think chat is not at all natural for humans to interact with computers, mainly because most people are not actually good at phrasing their needs and even if they are, typing it for the LLM to understand takes so much time.

    Thanks to the QWERTY layout, our keyboards are not efficient for typing either.

    • skybrian 399 days ago
      Counterpoint: Google’s search engine is pretty successful? This suggests that people can use a freeform text box as input.

      Typing can be augmented with autocomplete. Some people like using a microphone, though.

      • XTHK 399 days ago
        I think when Bard reaches full integration with Google products and it is fully unleashed upon android phones with voice control, it will be a watershed moment.
    • fsiefken 399 days ago
      Yes, I agree it should be faster: 100 ms between question and concise answer. With standard phrasing of needs and wants (like Cucumber or Non-Violent Communication), a Dvorak keyboard layout, or a shorthand input system (yash or bref) you can go a long way. For standard stuff you could always drop to short Ruby-like syntax.
    • Closi 399 days ago
      Depends what you are trying to do with a computer.

      Natural language is a pretty brilliant way to interact with a computer if you want to learn a foreign language, but it’s probably not great if you are wanting to play a first person shooter game.

      One thing is clear though: it’s not one size fits all.

    • mholm 399 days ago
      The existing paradigm is temporary. AI will improve to infer more meaning from what users want, increase their ability to perform it, and chat systems will eventually switch to optional voice input. You tell the system what to do, and it'll perform whatever steps it needs to. I can't imagine anything more natural than that, for the everyday user.

      I'm sure things will get more esoteric, for the experienced computer user.

      • afavour 399 days ago
        I don’t mean to be mean but this is an example of what bugs me about the AI hype cycle.

        Person has a valid criticism; the answer is a vague, wishy-washy “it’ll improve and be able to provide all things to all people in exactly the way they need it”

        I don’t know what I’m supposed to do with that statement. What are the concrete steps from here to there? What’s the timeline?

        • mholm 399 days ago
          Looking at the difference from GPT-3.5 to GPT-4, you can see these features are already appearing. GPT-4 can do more, understand more types of data, infer more context, and craft more advanced responses. With the right prompting, it can even ask clarifying questions to handle ambiguity. Most of the time when people talk about the limitations of AI, they're talking about limitations that are _very_ close to being improved upon. Context windows are a weakness, but they're growing, and memory features are being developed. Intelligence is improving. Understood data types are expanding. My guess is that GPT-5 (and equivalents) will understand voice natively, instead of needing a speech to text model in the middle.

          There are absolutely serious limitations to existing AI, but the criticisms mentioned here aren't where we're stuck.

        • og_kalu 399 days ago
          There's nothing he says that isn't already here though. Bing and Bard both have voice input, and even the technology for more native speech abilities (so not just STT) is already here: https://google-research.github.io/seanet/audiopalm/examples/

          They're already able to infer a great deal from context. They will improve simply because we've not hit any scaling walls yet.

      • behnamoh 399 days ago
        Yes, being able to voice chat with AI is interesting, and we can already do that (e.g., using voice-to-text).

        > AI will improve to infer more meaning from what users want

        This is true. It already knows what it should do when I give it a piece of code and say "🐛" (the bug emoji) or "spot the bug". If you give it the error code, it already knows it has to find a solution.

        But for non-programming tasks, it'll be a challenge to infer what the user wants because language is vague and people are used to speaking ambiguously and clarifying only when the other person doesn't understand.

  • getpost 399 days ago
    This morning I asked it if it could query YouTube and I got a short description of the YouTube API. I don't know whether those API details are public.

    I then used it to search for videos:

    > There are several YouTube videos with a title that begins, "Gregory Blackstock, The Incomplete Historical World," List those videos with the YouTube URLs.

    In this particular instance, the titles aren't unique, but the videos are different. The result appeared to be correct. The list was formatted so I could paste it into an email, something that doesn't happen when I search YouTube itself. Nice!

    My next prompt was

    > Make a list of the top 5 most viewed YouTube videos about moths.

    Strangely, the result was 3 groups of 5 videos, with some duplicate results.

    > Why did you list 15 videos in your reply when I asked for 5?

    Reply: "I apologize for the confusion. I am still under development, and I am always learning.

    I listed 15 videos because I found it difficult to narrow down the list to just 5. There are many beautiful and fascinating moth videos on YouTube, and I wanted to share as many as I could.

    I will try to do better next time. If you have any feedback on how I can improve my responses, please let me know."
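
    Incidentally, that "top 5 most viewed YouTube videos about moths" prompt maps almost directly onto the public YouTube Data API v3 search endpoint; a quick sketch, assuming you have an API key from the Google Cloud console:

      import requests

      API_KEY = "YOUR_API_KEY"  # YouTube Data API v3 key

      resp = requests.get(
          "https://www.googleapis.com/youtube/v3/search",
          params={
              "part": "snippet",
              "q": "moths",
              "type": "video",
              "order": "viewCount",   # most viewed first
              "maxResults": 5,
              "key": API_KEY,
          },
          timeout=10,
      )
      resp.raise_for_status()

      for item in resp.json()["items"]:
          video_id = item["id"]["videoId"]
          title = item["snippet"]["title"]
          print(f"{title}: https://www.youtube.com/watch?v={video_id}")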

  • iandanforth 399 days ago
    I'm a bit concerned about one of the privacy terms. When you enable the connection to Google Workspace you are told:

    To complete your requests, Bard will:

    - Access and manage items from Google Workspace, like your emails and documents

    - Share parts of your conversation, and other relevant info, with Google Workspace, which may be used to improve its services

    That second line means that your conversation could be sent to humans for review, just by submitting a query that invokes gmail, drive, etc. This is in direct contrast to the terms of Bard itself which won't store, use for training, or allow human review of conversations unless you explicitly submit feedback about said conversations.

    This amounts to "If you turn on Google Workspace, any of your conversations could be seen by humans."

    • mark_l_watson 399 days ago
      I interpreted their privacy statements for Bard extensions differently. I thought they clearly stated that no humans would read your content and your queries would not be used for training.

      EDIT: someone else here mentioned that Bard results may be viewed by Google developers, so I might be wrong in my statement.

  • virgildotcodes 399 days ago
    Just tried it for a few minutes with all extensions enabled and it failed at almost every task I proposed.

    How many emails are in my gmail? (Completely incapable of answering this, didn't even try, just listed my last ~5 emails)

    Based on my gmail account, how many flights have I booked in the last two years? (Also completely incapable, and didn't try, and again listed those same last ~5 emails)

    When is the cheapest one way flight from NYC to Bali over the next six months? (Was for some weird reason only capable of returning round trip flights, but it did at least give me a plausible date and list of flights)

    I'm probably using it wrong, but not a super "wow" first impression.

    • Game_Ender 398 days ago
      I think you need to give it tasks that enable it to look up and process specific documents. I am not sure they designed it for data mining tasks. And it’s probably not as smart as GPT4, which might break it down into steps like “search for possible flight confirmations” and “validate results”.
    • esotericimpl 399 days ago
      [dead]
  • tw04 399 days ago
    This seems incredibly dangerous. You're one typo away from having the entirety of your private life exposed by bard.

    And based on their extremely loose privacy policy, I can just imagine Google pitching this to advertisers for "targeted marketing". "Hey Bard, give me an email template to manipulate tw04 into buying my product".

    Also, I find it more than a bit disingenuous that the privacy policy on bard.google.com links to their generic privacy policy, not their BARD privacy policy. And after reading the real one, I understand why:

    https://support.google.com/bard/answer/13594961?hl=en#your_d...

    They will use all of your private data for advertising, and a human will review the data fed into bard. In other words, all of your private information is now reviewed by a human as they see fit. Yuck.

    >Please don’t enter confidential information in your Bard conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.

    Maybe put that one front and center on your bard page, not buried on a completely different website....

    • afavour 399 days ago
      > All of your private information is now reviewed by a human

      FTA:

      > If you choose to use the Workspace extensions, your content from Gmail, Docs and Drive is not seen by human reviewers, used by Bard to show you ads or used to train the Bard model

      I’m as big of an AI skeptic as anyone but even I believe you can set up an integration of personal information that doesn’t leak publicly.

      • tw04 399 days ago
        >I’m as big of an AI skeptic as anyone but even I believe you can set up an integration of personal information that doesn’t leak publicly.

        I've been using google products long enough to know that when a blog post differs from the policy language, the blog post always loses. Let me know when they update the actual privacy policy to reflect the blog post.

        • summerlight 399 days ago
          If you want to earn some credibility then you could point to the exact privacy statements that conflict with this blog post.
    • jvolkman 399 days ago
      In my experience, there's a message with every new Bard conversation that links to the page you linked to and says "Human reviewers may process your Bard conversations for quality purposes. Don't enter sensitive info."
      • panarky 399 days ago
        "Bard conversation" can be different from "private information in Drive and Gmail".
  • melx 399 days ago
    Someone asked[0] what happened to Bard just 10 hours ago (or 8 hours if relative to this posting).

    [0] https://news.ycombinator.com/item?id=37564768

    • behnamoh 399 days ago
      Yup, maybe Google saw my post and this is their response!
      • melx 399 days ago
        I was thinking...maybe Bard read your post and decided for itself.
  • CTDOCodebases 399 days ago
    Bard is so terrible. I can’t imagine anyone is actually using it based on my experience.
    • gman83 399 days ago
      In my experience it's better than GPT3.5, not as good as GPT4.
    • jeffbee 399 days ago
      I have the integration with google search enabled and now I only use GPT very rarely. Having it at the top of the SERP with zero effort is much more convenient. The fact that it gets many things wrong becomes irrelevant when it takes no effort on my part. Even when it is totally wrong, its feature to intersperse the output with links to its sources is quite nice.
      • 7ewis 399 days ago
        Are you in the US? I'm in the UK and have had an ad for it in the Google app for a while, but it always errors when I tap it on my phone.

        Believe it's US only which is why it won't work, but would love to give it a go. Did try a VPN but no luck still!

      • fsiefken 399 days ago
        I'm curious, is it better than Bing Chat here (not comparing the layout)?
        • jeffbee 399 days ago
          What I am saying is the layout is the benefit. On Bing I have to switch over to Chat and wait.
    • labrador 398 days ago
      Have you tried the latest upgrade that dropped 2 days ago? It's much better.
    • bobsmith432 399 days ago
      The filter is too strong to be funny. I know you can answer it, Mr. AI, but some skinny 20-something-year-old told you not to in the name of "ethics" (I only use AI for my amusement)
  • mayop100 398 days ago
    Bard's integration with Gmail is extremely basic -- it's just using regular Gmail search to include a few relevant emails in its prompt and then trying to answer your question. There's no AI on the email side.

    We launched a much more capable Gmail + AI assistant this morning here: https://news.ycombinator.com/item?id=37585990#37586627

    We're using embeddings + a vector DB + a cross-encoder + GPT-4 to deliver a much smarter and more capable assistant.
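
    Roughly, the shape is something like the sketch below (a generic illustration, not our actual code; the model names and the ask_gpt4() stub are placeholders).

      import numpy as np
      import faiss
      from sentence_transformers import SentenceTransformer, CrossEncoder

      emails = ["Your flight to Bali is confirmed for May 3.",
                "Lunch on Tuesday?",
                "Receipt: round-trip NYC-Denver, booked Jan 12."]

      # 1. Embed the corpus and build a vector index.
      embedder = SentenceTransformer("all-MiniLM-L6-v2")
      emb = embedder.encode(emails, normalize_embeddings=True)
      index = faiss.IndexFlatIP(emb.shape[1])      # inner product == cosine (normalized)
      index.add(np.asarray(emb, dtype="float32"))

      # 2. Retrieve candidate emails for a question.
      question = "How many flights have I booked?"
      q = embedder.encode([question], normalize_embeddings=True)
      _, ids = index.search(np.asarray(q, dtype="float32"), 3)
      candidates = [emails[i] for i in ids[0]]

      # 3. Rerank the candidates with a cross-encoder for precision.
      reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
      scores = reranker.predict([(question, c) for c in candidates])
      top = [c for _, c in sorted(zip(scores, candidates), reverse=True)][:2]

      # 4. Hand the top passages to the LLM (placeholder stub).
      def ask_gpt4(question: str, context: list[str]) -> str:
          return f"(would call GPT-4 here with {len(context)} passages)"

      print(ask_gpt4(question, top))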

  • alphabetting 399 days ago
    Getting accurate results about half the time searching my own content. Probably harder for me because I have hundreds of thousands of messages in Gmail and tens of thousands of files in my Drive.

    example of successfully retrieving a PDF: https://i.imgur.com/Y6cSlCx.png

  • saberience 399 days ago
    This makes me want to use it less, if anything. I just simply don't trust Google anymore.
  • browningstreet 399 days ago
    For summary extractions I’ve found Bard to be miles better than ChatGPT of late. Waiting for some of my dependent services to add api support for Bard… I’m guessing this will go back and forth a few times over the coming years.
  • kkarpkkarp 399 days ago
    I asked for flights from Poland to Cancun, Mexico in May 2024. It told me I could fly with Wizzair (an operator that only flies within Europe) with a stopover in Kyiv, Ukraine (lol).

    Maybe it can connect to Google Apps but can't give reliable results.

    • dymk 399 days ago
      Maybe it knows something we don't that'll happen in early 2024?
  • Decabytes 399 days ago
    Still waiting for bard to add an api so we can start building tools based on their model
    • JuanPosadas 398 days ago
      They're already available in GCP under Vertex AI; there are a lot of APIs/models.
    • amf12 398 days ago
      The model is available as an API via GCP.
  • ericpauley 399 days ago
    Has anyone managed to enable this on Google Workspace (business) accounts?
    • tlogan 399 days ago
      Isn’t Duet AI for that? My understanding is that Bard is for consumers while Duet is for Workspace.
      • ericpauley 399 days ago
        Unlucky. Yet another case where Google continually burns individuals who pay to use their own domain.
  • renegat0x0 399 days ago
    Ahhh yeah, that is what I want. Access to my data for a corporate algorithm that can hallucinate and accuse me of anything.
  • xinayder 399 days ago
    How can I disable Bard from crawling my stuff?
    • 7ewis 399 days ago
      Just tried it out, it pops up and asks you to approve the connection when you go to Bard.
  • andy_ppp 399 days ago
    Makes me wonder, are they using everyone's data to train and personalise Bard? It must be incredibly tempting to use Chat AIs to persuade you to buy products advertised by Google in a way that you will perfectly respond to.
    • drivebycomment 399 days ago
      I was curious about that and the linked blog post answers that:

      > If you choose to use the Workspace extensions, your content from Gmail, Docs and Drive is not seen by human reviewers, used by Bard to show you ads or used to train the Bard model.

  • m3kw9 399 days ago
    I haven’t used a single Bard thing Google has released yet. Their marketing dept is really shtty. When I go to bard.google.com it does not allow any sort of usage