Larry Magid: Generative AI is getting smarter

In my more than 40 years covering the technology sector, I’ve never seen such rapid advancement as with Generative AI (GAI). It’s been about two years since ChatGPT emerged, and in that time, we’ve seen not only new competitors but also significant improvements across GAI systems.

Even in its early stages, ChatGPT was remarkably capable. It could answer questions on nearly any subject and help with essays, articles, poems, songs, images, and even computer code. Since then, a host of additional enhancements has made these systems even more powerful.

For instance, until about a year ago, ChatGPT didn’t know about events after September 2021. Now, it stays current. When I asked, “What important things happened in today’s news?” it provided a timely update on key news stories.

Other ChatGPT enhancements include the ability to upload files. For example, you can upload a document and ask it to rewrite or summarize it for you. ChatGPT also recently added the ability to load documents directly from Google Drive. In March 2023, OpenAI introduced GPT-4, which brought more advanced reasoning and creativity.

ChatGPT’s recent integration of search functionality for paid subscribers ($20/month) brings it closer to Google’s capabilities. Meanwhile, Google has integrated its Gemini GAI models into search, providing a summary of search topics alongside traditional results. When I asked Google to “compare Apple Watches and Pixel Watches,” it offered a concise 200-word comparison with links to reviews and other web content.

Many of the GAI services now include two-way voice functionality. You can ask them questions by voice, and they can respond with a computer-generated voice that now sounds like a real human. Both Apple and Google are starting to integrate Generative AI into their smartphone operating systems. Google is going all out with AI in its new Pixel 9 phones. When you press the button to activate Google Assistant, you can ask it questions with your voice and listen to the answer. You can also engage in a conversation. I asked my Pixel 9, “Who is Dustin Hoffman?” and while it was telling me about him, I pressed the mic key and interrupted, asking “When was he born?” followed by “Who is he married to?” and the natural-sounding voice answered my questions.

Microsoft has integrated its Copilot GAI system into Windows 11. Apple is integrating what it calls “Apple Intelligence” into the iPhone, iPad and Mac. In other words, Siri is getting a lot smarter.

One of the things that frustrates me about some GAI systems is not knowing where the information comes from. These systems have access to the entire public internet, and if you don’t know the source of the information, you can’t really trust it. I recently burned my finger and asked ChatGPT what to do. In addition to offering some excellent advice, it also said, “If needed, take an over-the-counter pain reliever like ibuprofen or acetaminophen to manage pain and reduce swelling.” I happened to know that acetaminophen (such as Tylenol) does not reduce swelling, so I asked, “Are you sure acetaminophen reduces swelling?” and it confessed, “You’re correct to double-check! Acetaminophen does not reduce swelling.”

By the way, if you ask the same question, it’s possible you’ll get a different answer. ChatGPT and other GAI systems are constantly learning and revising their answers. And even if they haven’t learned anything new, you might get a different response, because each answer is generated on the fly.

Perplexity.AI is one GAI system that does provide links to sources. I asked the same question about my burn, and it, too, incorrectly said that acetaminophen would “help with pain and inflammation.” When I clicked on the link, I was taken to a page from the UK National Health Service that contained accurate information.

Google launched its Bard GAI system in March 2023. Clearly, Google had been working on GAI, but it took ChatGPT’s popularity to get the company to give it a major push. In February 2024, Bard became Gemini, which itself has evolved significantly over the past year. Gemini Advanced is now included with a Google One AI Premium subscription, which also comes with 2 terabytes of storage and other benefits. There is still a free version of Gemini, but the advanced version uses a more powerful AI model and has additional features. For example, though the free version can generate images, only the Advanced version can currently generate images of people, and neither version will produce photorealistic images of identifiable people, children or other images that go against its guidelines.

Meta, which owns Facebook and Instagram, is also a major player in Generative AI. It has its own standalone Meta.AI website that operates similarly to ChatGPT and Gemini, but the company is also building AI into its other products. The Meta AI Assistant is now available on WhatsApp, Messenger, Instagram and Ray-Ban Meta smart glasses.

The integration of Meta AI into its smart glasses is particularly interesting. I have a pair and have used the AI feature to translate signs and menus when traveling overseas and to identify landmarks. You can also ask it questions such as “How tall is the Eiffel Tower?” “Where is it located?” and “What airlines fly from San Francisco to Paris?” But it, too, can make mistakes. I asked what airlines fly nonstop from San Jose, California, to Los Angeles, and it included Air France, which of course does not offer that route. When I informed it that Air France does not fly that route, it apologized, and the next time I asked, it didn’t include Air France. In fairness, it might have been referring to Air France’s codeshare partner, Delta, which does fly that route.

All of these GAI systems have evolved to the point where I can now pronounce them very good. But being good has its dangers. Because they are right most of the time, it’s tempting to rely on them, yet as I’ve pointed out, they can make mistakes, so relying on them can be problematic and potentially dangerous. In addition to not making important decisions based on their information, I would be careful about using them as the sole source for papers, essays or even social media posts. They’re a useful research tool, but before relying on them, verify the information with a reputable source, and cite that source if you’re reporting it to others.

Larry Magid is CEO of ConnectSafely, a nonprofit Internet safety organization that receives financial support from some of the companies mentioned in this article. Contact him at [email protected].
