This showed up in my Gmail inbox today:
It turns out this puts me on a waitlist …
Is it Google’s ChatGPT?
I’ve “chatted” with Bard twice. I really don’t understand the appeal, or maybe Google is just really bad at this?
I asked it for opinions on “sensitive” topics and it refused to answer, as expected. But then I asked it fact-based yet somewhat niche questions (specifically about 2023 model-year vehicles for sale in the USA, questions easily answered from the first ten Google results for the same query, and ones where the manufacturers’ official documentation spells out the answers) and it often gave me completely incorrect answers. After telling Bard its answer was wrong and providing feedback, two of the three draft answers it gives days later for the same questions are still completely incorrect.
To me, it feels like this “AI” can be very powerful, but only if you’re already an expert in the area. If you’re new to a topic and ask Bard a question, you won’t have any idea whether the answer is correct. That’s concerning, especially if it’s often going to be wrong.
My few tests on chat.openai.com have been mixed as well. For some things, it helps condense a bunch of information down to a really useful procedure. However, you never know when it’s wrong, because it always sounds very confident. For instance, I was recently trying to get help with a Caddy v2 config issue, and it kept giving me Caddy v1 syntax even though I asked for v2.
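To give a rough idea of the kind of mix-up I mean (a sketch from memory, not my actual config, and example.com / the upstream port are just placeholders): v1 used a `proxy` directive, while v2 replaced it with `reverse_proxy`, and the model kept handing me the old form.

```
# Caddy v1 style Caddyfile (what it kept suggesting; no longer valid in v2)
example.com {
    proxy / localhost:8080 {
        transparent
    }
}

# Caddy v2 style Caddyfile (what I actually needed)
example.com {
    reverse_proxy localhost:8080
}
```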
This sounds a lot like the advice from “CoPilot Review: My Thoughts After 6 Months.”