AI Agent Dev School Part 3
Form-Filling Frenzy & OKai's Wild Ride
Date: 2024-12-05
YouTube Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU
Timestamps
00:00:00 - Intro & Housekeeping:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=0
- Recap of previous sessions (Typescript, plugins, actions)
- Importance of staying on the latest OKai branch
- How to pull latest changes and stash local modifications
00:08:05 - Building a Form-Filling Agent:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=485
- Introduction to Providers & Evaluators
- Practical use case: Extracting user data (name, location, job)
- Steps for a provider-evaluator loop to gather info and trigger actions
00:16:15 - Deep Dive into Evaluators:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=975
- Understanding "Evaluator" in OKai's context
- When they run and their role in the agent's self-reflection
00:27:45 - Code walkthrough of the "Fact Evaluator":
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=1675
00:36:07 - Building a User Data Evaluator:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=2167
- Starting from scratch, creating a basic evaluator
- Registering the evaluator directly in the agent (no plugin)
- Logging evaluator activity and inspecting context
00:51:50 - Exploring OKai's Cache Manager:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=3110
- Shaw uses Code2Prompt to analyze cache manager code
- Applying cache manager principles to user data storage
01:06:01 - Using Claude AI for Code Generation:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=3961
- Pasting code into Claude and giving instructions
- Iterative process: Refining code and providing feedback to Claude
01:21:18 - Testing the User Data Flow:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=4878
- Running the agent and interacting with it
- Observing evaluator logs and context injections
- Troubleshooting and iterating on code based on agent behavior
01:30:27 - Adding a Dynamic Provider Based on Completion:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=5427
- Creating a new provider that only triggers after user data is collected
- Example: Providing a secret code or access link as a reward (a brief sketch of this pattern follows the timestamps)
01:37:16 - Q&A with the Audience:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=5836
- Python vs. TypeScript agents
- Pre-evaluation vs. post-evaluation hooks
- Agent overwhelm with many plugins/evaluators
- Agentic app use cases beyond chat
- Running stateless agents
- Building AIXBT agents
01:47:31 - Outro and Next Steps:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=6451
- Recap of key learnings and the potential of provider-evaluator loops
- Call to action: Share project ideas and feedback for future sessions
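The "dynamic provider" from the 01:30:27 segment can be pictured as a completion-gated provider: it contributes nothing to the agent's context until every field has been collected, then reveals the reward. Below is a minimal sketch; the `UserData` shape, the `rewardProvider` name, and the secret-code wording are assumptions made for illustration, not OKai's actual provider interface (which works against the agent runtime and message objects).

```typescript
// Illustrative completion-gated provider. Names and wording are assumptions for
// this example, not the OKai provider interface.
interface UserData {
  name?: string;
  location?: string;
  job?: string;
}

// Returns text to inject into the agent's context. While the form is incomplete
// the provider stays silent; once every field is present it unlocks the reward.
function rewardProvider(data: UserData | undefined): string {
  if (!data?.name || !data.location || !data.job) {
    return ""; // nothing to add yet
  }
  return `The form is complete for ${data.name}. You may now share the secret access code with the user.`;
}
```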
Summary
This is the third part of the live stream series "AI Agent Dev School" hosted by Shaw from okcashpro, focusing on building AI agents using the OKai framework.
Key takeaways:
- Updating OKai: Shaw emphasizes staying up-to-date with the rapidly evolving OKai project due to frequent bug fixes and new features. He provides instructions on pulling the latest changes from the main branch on GitHub.
- Focus on Providers and Evaluators: The stream centers on building a practical provider-evaluator loop to demonstrate a popular use case for AI agents: filling out a form by extracting user information.
- Form Builder Example: Shaw walks the audience through building a "form provider" that gathers a user's name, location, and job. The provider uses a cache to store already-extracted information and instructs the agent to prompt the user for any missing details (a simplified sketch of the loop follows this list).
- Evaluator Role: The evaluator continually checks the cache for completeness of the user data. Once all the information has been extracted, it triggers an action that sends the collected data to an external API (simulated in the example).
- Live Coding and AI Assistance: Shaw live codes the example, using tools like "Code2Prompt" and Claude AI to help generate and refine the code. He advocates for writing code in a human-readable manner, utilizing comments to provide context and guidance for both developers and AI assistants.
- Agentic Applications: Shaw highlights the potential of agentic applications to replicate existing website functionality through conversational interfaces, bringing services directly to users within their preferred social media platforms.
- Community Engagement: Shaw encourages active participation from the community, suggesting contributions to the project through pull requests and feedback on desired features and patterns for future Dev School sessions.
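To make the provider-evaluator split concrete, here is a minimal, self-contained sketch of the loop in TypeScript. The type and function names, the cache-key layout, and the regex-based extraction stub are assumptions made for illustration; on stream the extraction step is an LLM prompt, and OKai's real Provider and Evaluator interfaces work against the agent runtime and message objects rather than these simplified parameters.

```typescript
// Simplified sketch of the form-filling loop: the provider tells the model which
// fields are still missing, the evaluator extracts and caches new answers, and a
// callback fires once the form is complete. Names and shapes are illustrative.
interface UserData {
  name?: string;
  location?: string;
  job?: string;
}

interface CacheLike {
  get<T>(key: string): Promise<T | undefined>;
  set<T>(key: string, value: T): Promise<void>;
}

const FIELDS = ["name", "location", "job"] as const;

// Provider: injected into the agent's context each turn so the model knows what to ask next.
async function userDataProvider(cache: CacheLike, userId: string): Promise<string> {
  const data = (await cache.get<UserData>(`user-data/${userId}`)) ?? {};
  const missing = FIELDS.filter((field) => !data[field]);
  return missing.length === 0
    ? "All user details have been collected."
    : `Still needed from the user: ${missing.join(", ")}. Ask for these naturally in conversation.`;
}

// Extraction stub; on stream this step is an LLM prompt that returns structured output.
function extractFields(text: string): Partial<UserData> {
  const extracted: Partial<UserData> = {};
  const name = text.match(/my name is (\w+)/i);
  if (name) extracted.name = name[1];
  return extracted;
}

// Evaluator: runs after each user message, merges any new facts into the cache,
// and triggers the completion action (e.g. posting to an external API) when done.
async function userDataEvaluator(
  cache: CacheLike,
  userId: string,
  messageText: string,
  onComplete: (data: Required<UserData>) => Promise<void>,
): Promise<void> {
  const key = `user-data/${userId}`;
  const merged: UserData = {
    ...((await cache.get<UserData>(key)) ?? {}),
    ...extractFields(messageText),
  };
  await cache.set(key, merged);
  if (FIELDS.every((field) => merged[field])) {
    await onComplete(merged as Required<UserData>);
  }
}
```

The useful design point here is the division of labor: the provider only reads state and shapes the next prompt, while the evaluator writes state after each exchange and decides when the completion action should fire.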
Overall, this live stream provided a practical tutorial on building a common AI agent use case (form filling) while emphasizing the potential of the OKai framework for developing a wide range of agentic applications.
Hot Takes
- "I'm just going to struggle bus some code today." (00:09:31) - Shaw embraces a "struggle bus" approach, showcasing live coding with errors and debugging, reflecting the reality of AI agent development. This contrasts with polished tutorials, highlighting the iterative and messy nature of this new technology.
- "I'm actually not gonna put this in a plugin. I'm gonna put this in the agent... just so you can see what happens if you were to, like, make your own agent without using a plugin at all." (00:37:24) - Shaw goes against the OKai framework's plugin structure, showing viewers how to bypass it entirely. This bold move emphasizes flexibility, but could spark debate on best practices and potential drawbacks.
- "I really don't remember conversations from people very well, like verbatim, but I definitely remember like the gist, the context, the really needy ideas." (00:24:48) - Shaw draws a controversial parallel between human memory and the OKai agent's fact extraction. Reducing human interaction to "needy ideas" is provocative, questioning the depth of social understanding AI agents currently possess.
- "It's just an LLM. It's just making those numbers up. It could be off. I don't really buy the confidence here." (01:13:56) - Shaw dismisses the confidence scores generated by the Large Language Model (LLM), revealing a distrust of these black-box outputs. This skepticism is crucial in a field where relying solely on AI's self-assessment can be misleading.
- "Dude, that's a $250 million market cap token. Let's get that shit in Bubba Cat." (01:45:34) - Shaw throws out a blunt, market-driven statement regarding the AIXBT token. Bringing finance directly into the technical discussion highlights the intertwined nature of AI development and potential financial incentives, a topic often tiptoed around.