💻-coders 2024-11-01
Summary
In the technical discussion, LevelsDennis sought advice on transitioning to llama3.2 running locally, while smokyboo got ollama working without using the fork and planned to integrate it with the openai lib (a minimal sketch of that kind of setup follows below). hiroP described wiring a custom Blueprint (BP) into Unreal Engine (UE) that handles HTTP requests and lets in-world objects drive text-entry interactions with the agent, with voice commands as a possible next step. Jin shared notes from an overview of OKai on hackmd.io and planned to edit the VOD for further clarity. ferric expressed excitement about these developments on stakeware.xyz, while SotoAlt proposed creating a short guide with screenshots for newcomers and asked whether the stream was being recorded. Tenji worked out how to integrate a cloud LLM via together.xyz and asked what data sets MetaLlama 405B was trained on; ferric mentioned using MetaLlama 405B with Together.xyz for Spartan.
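Since ollama exposes an OpenAI-compatible API at http://localhost:11434/v1, the openai-lib integration smokyboo described can likely be reduced to overriding the client's base URL. A minimal sketch, assuming the llama3.2 model has already been pulled locally; this is illustrative, not smokyboo's actual code:

```typescript
// Point the official `openai` npm client at a locally running ollama server.
// Ollama ignores the API key, but the client requires a non-empty value.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:11434/v1", // local ollama, OpenAI-compatible route
  apiKey: "ollama",                     // placeholder; not checked by ollama
});

async function main() {
  const response = await client.chat.completions.create({
    model: "llama3.2", // assumes `ollama pull llama3.2` has been run
    messages: [{ role: "user", content: "Say hello from a local model." }],
  });
  console.log(response.choices[0].message.content);
}

main().catch(console.error);
```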
FAQ
- How do you transition your agent to llama3.2 running locally?
  - hiroP: He is in the process of making this transition and asks for advice or tips from others who have done it before. The conversation does not provide a clear step-by-step answer, but it shows he is seeking community support.
- Has anyone tried llama3.2 yet?
  - LevelsDennis: He had not tried llama3.2 yet at the time of this conversation.
- How many more agents do you have to transition after hiroP?
  - hiroP: He mentions that he has only 20 more agents left to transition, indicating his progress in moving from one version to the other.
- What is a potential fun use for llama3.2 mentioned by yikesawjeez?
  - yikesawjeez: They suggest using the hivemind feature of llama3.2 and shared a related SoundCloud link, which could make for an interesting application or experiment with this technology.
- What error is smokyboo experiencing while working on ollama?
  - smokyboo: The conversation does not specify the exact error; it only notes that smokyboo got ollama working locally and plans to make it work with the openai lib.
- How did hiroP wire up his custom BP for UE?
  - hiroP: He created a custom Blueprint (BP) that handles HTTP requests, allowing in-world objects to spawn UI elements and communicate with the locally running agent via text or voice. This setup enables interaction between users and agents inside Unreal Engine (UE).
- What data sets was MetaLlama 405B trained on?
  - Tenji: The question is asked, but no clear answer is given in this conversation. ferric mentions using MetaLlama 405B via Together.xyz for Spartan, which may simply reflect their specific use case rather than answer the training-data question. (A hedged sketch of calling a Together-hosted model through its OpenAI-compatible endpoint follows this FAQ.)
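For the together.xyz side of the conversation, Together also serves an OpenAI-compatible endpoint, so the same client pattern applies with a different base URL and a real API key. A hedged sketch; the exact 405B model id below is an assumption and should be checked against Together's model catalog:

```typescript
// Reuse the openai client against Together's OpenAI-compatible endpoint.
import OpenAI from "openai";

const together = new OpenAI({
  baseURL: "https://api.together.xyz/v1",      // Together's OpenAI-compatible route
  apiKey: process.env.TOGETHER_API_KEY,        // requires a Together account/key
});

async function ask405B(prompt: string) {
  const response = await together.chat.completions.create({
    // Assumed model id for the Llama 405B instruct model; verify before use.
    model: "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo",
    messages: [{ role: "user", content: prompt }],
  });
  return response.choices[0].message.content;
}
```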
Who Helped Who
- smokyboo helped hiroP with setting up a local model by getting ollama working locally without using the fork and planning to make it work with the openai lib.
- jin helped SotoAlt | WAWE by sharing resources for the OKai overview, including notes from the overview of OKai on HackMD.
Action Items
- Technical Tasks
- Trying out llama3.2 (mentioned by LevelsDennis)
- Getting ollama working locally without using the fork and interacting raw with fetch (mentioned by smokyboo; a raw-fetch sketch appears after this list)
- Making ollama work with the openai lib (mentioned by smokyboo)
- Wiring the agent up to UE with a custom Blueprint (BP) handling HTTP requests (mentioned by hiroP)
- Documentation Needs
- Sharing the recording of the stream and possibly creating a short guide with screenshots for newcomers (requested by SotoAlt | WAWE)
- Feature Requests
- Intercepting microphone input for voice interaction in the UE setup (mentioned by hiroP)
- Community Tasks
- Sharing the VOD of the stream and possibly creating a short guide with screenshots for newcomers (led by Jin)
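For the "interacting raw with fetch" item above, a minimal sketch of a plain fetch call against ollama's native /api/chat endpoint (no openai lib, no fork); this is illustrative rather than smokyboo's actual code:

```typescript
// Call ollama's native chat endpoint directly with fetch.
async function chatWithOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2",
      messages: [{ role: "user", content: prompt }],
      stream: false, // return a single JSON object instead of streamed chunks
    }),
  });
  if (!res.ok) throw new Error(`ollama returned ${res.status}`);
  const data = await res.json();
  return data.message.content; // non-streaming responses carry one message
}

chatWithOllama("What did we ship today?").then(console.log).catch(console.error);
```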