April 30, 2024 AI is a misnomer

Today’s newsletter was brought to you by Loom.

Deeper Learning

Hey folks! It’s Tuesday, you know what that means. Time to go deeper. In today’s digest, we’re covering last week’s top AI product, a secret new GPT model, hallucinations, and some recent AI products you might have missed.

Let’s get into it!


Training salespeople with fake customers

You’ve heard “Always be closing,” but the art of making deals comes down to another old maxim: practice, practice, practice. Last week’s top-upvoted AI product is all about it.

PaddleBoat puts salespeople (or founders, etc.) up against fake buyers. You can roleplay cold calls, customize personas, and tailor the faux prospects to give them context on your business. During launch, the makers shouted out some of the tech they use to make PaddleBoat: GPT-4 and Vapi. The former you know, and the latter is a voice AI infrastructure for developers to build, test, and deploy voicebots.

Writing updates about your project shouldn’t get in the way of shipping your project. Now there’s an AI solution to clear the way for your most important work — Loom AI workflows. 

Record a video, then watch as Loom AI automatically creates a share-ready doc from your transcript. File a Jira or Linear ticket, write an SOP, or document your code in just a click. Use the time you save to hit your deadlines.

It’s all part of Loom’s master plan to give you more time to focus on what counts – shipping. 

Save time and ship faster with Loom AI workflows. 


🕵️ GPT-5? That’s what a lot of people seemingly think after a secret AI model labeled “gpt-2-chatbot” was discovered on LMSYS Chatbot Arena, a site used to compare LLMs. Its origins aren’t clear, but on first pass, many users have claimed that the bot’s capabilities might exceed GPT-4’s, prompting GPT-5 rumors.

🍎 Apple AI: If you’ve been wondering when Apple will finally have its say in the AI space, it could be pretty soon. According to reports, Apple is poaching dozens of Google staff, in particular those with AI experience.


Talk to your data in plain English

Being “data-driven” in business is crucial, but querying data is often difficult for those who don’t know advanced BI tools or SQL.  

Outerbase launched a solution last year to help founders and their teams easily navigate their data with AI, largely through an AI chat assistant called “EZQL” which lets you query your data in plain English. After going through the 2023 YC winter batch, the team launched big updates. They redesigned the app from the ground up to be responsive (i.e., easily usable on mobile devices), added support for database connections including MySQL, Postgres, and MongoDB, and added a data studio with an optimized GPT-4 client, a SQL co-pilot, and AI-powered data visualization tools. Wow, go team!


What makes AI models hallucinate?

Question submitted by @josej30.

Did you know that if you ask “Who is King Renoit?” in OpenAI’s playground, it may tell you that King Renoit reigned from 1514 to 1544? The only problem is that King Renoit is totally made up.

So why would AI generate a fake response? Before I get to that, here’s one thing I find interesting about humans and AI: we tend to personify AI chatbots when we don’t have much information about how they work. And of course we would. It’s called “artificial intelligence,” after all. But the latter half of that phrase is kind of a misnomer. While neural networks are built to mimic the way neurons in your brain work together, they don’t reason quite like we do. A neural network is “essentially a complex prediction machine.”

Here’s a fun way to get a glimpse of how those predictions play out — a project called “Look into the machine’s mind.” A team of data scientists gave the ChatGPT API texts like “Intelligence is…” and charted the many varied responses the model would come up with. “Given a text, a Large Language Model assigns a probability for the word (token) to come,” the team explains, “and it just repeats this process until a completion is…well, complete.”
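To make that sample-and-repeat loop concrete, here’s a toy sketch in Python. The probability table is hand-built and purely illustrative — a real LLM computes these probabilities from billions of learned parameters — but the loop itself mirrors the process the team describes: pick a next token by probability, append it, repeat until done.

```python
import random

# Toy "language model": maps the current word to next-token probabilities.
# These numbers are made up for illustration; a real model computes them.
NEXT_TOKEN_PROBS = {
    "Intelligence": {"is": 0.9, "was": 0.1},
    "is": {"the": 0.5, "a": 0.4, "<end>": 0.1},
    "was": {"<end>": 1.0},
    "the": {"ability": 0.7, "capacity": 0.3},
    "a": {"skill": 0.6, "gift": 0.4},
    "ability": {"<end>": 1.0},
    "capacity": {"<end>": 1.0},
    "skill": {"<end>": 1.0},
    "gift": {"<end>": 1.0},
}

def complete(prompt_word, seed=0):
    """Repeatedly sample the next token until the completion is complete."""
    rng = random.Random(seed)
    tokens = [prompt_word]
    while tokens[-1] != "<end>":
        probs = NEXT_TOKEN_PROBS[tokens[-1]]
        choices, weights = zip(*probs.items())
        tokens.append(rng.choices(choices, weights=weights)[0])
    return " ".join(tokens[:-1])  # drop the <end> marker

print(complete("Intelligence"))
```

Different seeds give different completions — the same behavior the “Look into the machine’s mind” charts visualize at scale.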

So back to what happens when AI hallucinates. The model is simply trying to predict the best combination of tokens/words it can (like in that visualization), the combination it thinks will satisfy you, even if, say, it wasn’t trained on enough data. In other words, even if the model doesn’t have the right answer, it might give you an answer anyway.

Okay, but how do the models predict each of their words — or as @josej30 put it, “What’s the math behind it?” And, even if LLMs don’t “think” the way we do or have all the information, context, and nuance we have, can we get them there?

We’ll dive into those questions more in upcoming Deeper Learning editions. At a high level, the explanation for how LLMs make their predictions starts with parameters, which you can think of as settings that guide the LLM’s learning. But things get pretty complex from there as you start to learn about how parameters and “embeddings” predict tokens.
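As a very rough preview, here’s a toy sketch of that idea. The two-dimensional “embeddings” and the dot-product scoring below are hand-picked illustrations, not how any production model actually works, but they show the shape of it: learned parameter vectors are scored against a context, then turned into next-token probabilities with a softmax.

```python
import math

# Toy "parameters": tiny hand-picked embedding vectors.
# A real model learns billions of these values during training.
EMBEDDINGS = {
    "king":  [0.9, 0.1],
    "queen": [0.8, 0.2],
    "apple": [0.0, 1.0],
}

def next_token_probs(context_vec, vocab=EMBEDDINGS):
    """Score each candidate token by dot product with the context vector,
    then softmax the scores into a probability distribution."""
    scores = {w: sum(c * e for c, e in zip(context_vec, v))
              for w, v in vocab.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {w: math.exp(s) / z for w, s in scores.items()}

# A context vector pointing toward "king" makes royal words more probable.
probs = next_token_probs([1.0, 0.0])
print(probs)
```

The probabilities always sum to 1, and tokens whose vectors align with the context get the biggest share — which is why a model can confidently pick a plausible-sounding token it has no factual basis for.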

As for tackling hallucinations and, let’s call it, thinking better, there are several methods being implemented and refined now, including a recently popular technique called RAG, or Retrieval-Augmented Generation. With RAG, an LLM’s responses are grounded in knowledge fetched from external sources at query time, rather than from its training data alone. Come back next week to go deeper!
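At its simplest, a RAG pipeline looks something like the sketch below. The documents and keyword-overlap retrieval are stand-ins (real systems typically retrieve by embedding similarity from a vector store), and the final model call is left as just the assembled prompt string:

```python
# Minimal RAG sketch: retrieve the most relevant snippet from an external
# knowledge base, then prepend it to the prompt so the model answers from
# fetched facts instead of guessing. The documents are illustrative.
DOCUMENTS = [
    "RAG stands for Retrieval-Augmented Generation.",
    "PaddleBoat lets salespeople roleplay cold calls with AI buyers.",
    "Loom AI turns video transcripts into share-ready docs.",
]

def retrieve(question, docs=DOCUMENTS):
    """Naive retrieval: pick the doc sharing the most words with the
    question. Real systems use embedding similarity search instead."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question):
    """Assemble the augmented prompt an LLM would actually receive."""
    context = retrieve(question)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What does RAG stand for?"))
```

Because the model is handed relevant text up front, it has far less room to invent a King Renoit.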

Got a question for us? Add it here.


For Makers

  • PlayAI is a conversational assistant designed to build human-like voice AI agents.

  • Intellecta is an AI tool trained on your data to provide customer support. 

For Work

  • Meeting Muse helps your team identify ineffective meetings so you can make them more meaningful.

  • Summie is an AI meeting assistant that generates summaries, takeaways, and more.

For Developers

  • Langfuse 2.0 is an open-source AI platform for perfecting your LLM before releasing it. 

  • Langtail helps development teams ship AI apps faster and with fewer bugs. 

Here via forward? Subscribe here.

Have feedback?

Did you enjoy today's newsletter?

