AI Agent Dev School Part 3

Form-Filling Frenzy & OKai's Wild Ride

Date: 2024-12-05
YouTube Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU

Timestamps

00:00:00 - Intro & Housekeeping:

00:08:05 - Building a Form-Filling Agent:

00:16:15 - Deep Dive into Evaluators:

00:27:45 - Code walkthrough of the "Fact Evaluator":

00:36:07 - Building a User Data Evaluator:

00:51:50 - Exploring OKai's Cache Manager:

01:06:01 - Using Claude AI for Code Generation:

01:21:18 - Testing the User Data Flow:

01:30:27 - Adding a Dynamic Provider Based on Completion:

01:37:16 - Q&A with the Audience:

  • Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=5836
  • Python vs. TypeScript agents
  • Pre-evaluation vs. post-evaluation hooks
  • Agent overwhelm with many plugins/evaluators
  • Agentic app use cases beyond chat
  • Running stateless agents
  • Building AIXBT agents

01:47:31 - Outro and Next Steps:

Summary

This is the third part of the live stream series "AI Agent Dev School" hosted by Shaw from okcashpro, focusing on building AI agents using the OKai framework.

Key takeaways:

  • Updating OKai: Shaw emphasizes staying up-to-date with the rapidly evolving OKai project due to frequent bug fixes and new features. He provides instructions on pulling the latest changes from the main branch on GitHub.
  • Focus on Providers and Evaluators: The session builds a practical provider-evaluator loop to demonstrate a popular use case for AI agents: filling out a form by extracting user information.
  • Form Builder Example: Shaw walks the audience through building a "form provider" that gathers a user's name, location, and job. The provider uses a cache to store already-extracted information and instructs the agent to prompt the user for any missing details.
  • Evaluator Role: The evaluator continually checks the cache for completeness of the user data. Once every field has been extracted, it triggers an action that sends the collected data to an external API (simulated in the example); a simplified sketch of this provider-evaluator loop follows this list.
  • Live Coding and AI Assistance: Shaw live-codes the example, using tools like "Code2Prompt" and Claude AI to help generate and refine the code. He advocates writing code in a human-readable manner, using comments to provide context and guidance for both developers and AI assistants.
  • Agentic Applications: Shaw highlights the potential of agentic applications to replicate existing website functionality through conversational interfaces, bringing services directly to users within their preferred social media platforms.
  • Community Engagement: Shaw encourages active participation from the community, suggesting contributions to the project through pull requests and feedback on desired features and patterns for future Dev School sessions.
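
Below is a minimal, self-contained sketch of that provider-evaluator loop, in the spirit of what was built on stream. It uses plain TypeScript functions and a toy in-memory cache rather than OKai's actual Provider/Evaluator interfaces or cache manager, so the names here (UserData, CacheStore, userDataProvider, userDataEvaluator, and the simulated submission) are illustrative assumptions, not framework APIs.

```typescript
// Illustrative stand-ins only; not the real OKai interfaces.
interface UserData {
  name?: string;
  location?: string;
  job?: string;
}

// Toy in-memory cache, standing in for the framework's cache manager.
class CacheStore {
  private store = new Map<string, UserData>();
  async get(key: string): Promise<UserData> {
    return this.store.get(key) ?? {};
  }
  async set(key: string, value: UserData): Promise<void> {
    this.store.set(key, value);
  }
}

const cache = new CacheStore();
const cacheKey = (userId: string) => `user-data/${userId}`;

// Provider: surfaces the known fields to the agent and flags what is still
// missing, so the model knows to ask the user for it.
async function userDataProvider(userId: string): Promise<string> {
  const data = await cache.get(cacheKey(userId));
  const missing = (["name", "location", "job"] as const).filter((f) => !data[f]);
  if (missing.length === 0) {
    return `Known user details: ${JSON.stringify(data)}. All fields collected.`;
  }
  return (
    `Known user details: ${JSON.stringify(data)}. ` +
    `Politely ask the user for their ${missing.join(", ")}.`
  );
}

// Evaluator: after each message, merge any newly extracted fields into the
// cache; once every field is present, trigger the completion step
// (here just a simulated external API call, as in the stream).
async function userDataEvaluator(
  userId: string,
  extracted: UserData // on stream this comes from an LLM extraction step
): Promise<void> {
  const merged = { ...(await cache.get(cacheKey(userId))), ...extracted };
  await cache.set(cacheKey(userId), merged);
  if (merged.name && merged.location && merged.job) {
    console.log("Submitting completed form:", merged);
  }
}
```

The "Dynamic Provider Based on Completion" segment (01:30:27) follows the same shape: once the evaluator marks the cached data as complete, the provider can change what it injects, for example confirming the submission instead of asking for more details.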

Overall, this live stream provided a practical tutorial on building a common AI agent use case (form filling) while emphasizing the potential of the OKai framework for developing a wide range of agentic applications.

Hot Takes

  1. "I'm just going to struggle bus some code today." (00:09:31,664) - Shaw embraces a "struggle bus" approach, showcasing live coding with errors and debugging, reflecting the reality of AI agent development. This contrasts with polished tutorials, highlighting the iterative and messy nature of this new technology.

  2. "I'm actually not gonna put this in a plugin. I'm gonna put this in the agent... just so you can see what happens if you were to, like, make your own agent without using a plugin at all." (00:37:24,793) - Shaw goes against the OKai framework's plugin structure, showing viewers how to bypass it entirely. This bold move emphasizes flexibility, but could spark debate on best practices and potential drawbacks.

  3. "I really don't remember conversations from people very well, like verbatim, but I definitely remember like the gist, the context, the really needy ideas." (00:24:48,180) - Shaw draws a controversial parallel between human memory and the OKai agent's fact extraction. Reducing human interaction to "needy ideas" is provocative, questioning the depth of social understanding AI agents currently possess.

  4. "It's just an LLM. It's just making those numbers up. It could be off. I don't really buy the confidence here." (01:13:56,971) - Shaw dismisses the confidence scores generated by the Large Language Model (LLM), revealing a distrust of these black-box outputs. This skepticism is crucial in a field where relying solely on AI's self-assessment can be misleading.

  5. "Dude, that's a $250 million market cap token. Let's get that shit in Bubba Cat." (01:45:34,809) - Shaw throws out a blunt, market-driven statement regarding the AIXBT token. Bringing finance directly into the technical discussion highlights the intertwined nature of AI development and potential financial incentives, a topic often tiptoed around.