Hi all!
Welcome to the inaugural issue of the AI Agentplex, the world’s dedicated newsletter for AI Agents! We cover key developments in AI research, industry, and startups, as well as updates on the AI Agents Global Challenge (www.aiagentschallenge.com).
AI Agents Global Challenge updates
We officially launched the Challenge on 11 March and have already received 100+ sign-ups. Remember, you can keep refining your application up to the submission deadline of 1 September, so we encourage you to apply early and get access to the team throughout the application window.
The Challenge offers US$1 million in grants for solutions utilizing AI Agents in any field. It is open to AI enthusiasts, students, professionals, startups, and organisations of any size. Whether you’re a beginner or an expert, you are welcome to participate.
Over the coming weeks we will be hosting a series of events, including webinars and hackathons, to build up the community. In the meantime, please join our community via our Discord or email us for a 1:1 discussion.
Industry headlines and talks
AI Agentic Workflows: Andrew Ng, founder of DeepLearning.AI and AI Fund, speaks at Sequoia Capital's AI Ascent about what's next for AI agentic workflows and their potential to significantly propel AI advancements
LLM Agent Operating System: Researchers published a paper on AIOS, a custom operating system designed to optimize resource allocation and enable concurrent execution of agents integrated with LLMs
Octopus v2: On-device language model for super agent: A specialized LLM for AI Agents; the paper presents a new method that enables an on-device model with 2 billion parameters to surpass GPT-4 in both accuracy and latency
Nvidia announces AI-powered healthcare agents: Alongside a wide suite of microservices, Nvidia has also partnered with Hippocratic AI to develop AI “agents” that outperform human nurses on video calls, cost significantly less per hour, and are designed to form a human connection with patients through “super-low latency conversational reactions”.
Foundational Secures $8M venture funding to bring AI Agents into Data Engineering: The startup deploys agents that help organizations find, fix, and prevent data issues of any type before code deployment, making it easier to build and maintain the code that drives their data.
Friends of the Ecosystem
As we grow the AI Agents ecosystem, we will regularly introduce its tools and participants, including toolkits for building and running agents. This week we look at the importance of assessing AI Agent performance.
(Brought to you by Artificial Analysis – www.artificialanalysis.ai)
Agent use cases push the limits of what is possible with current LLMs, so selecting the right models and hosting providers is critical to building successful agents. The spread in inference pricing between the large and small models available in the market today is more than 300x, and the spread in inference speed (output tokens per second) is more than 15x. Carefully understanding your quality and performance requirements, and choosing the trade-offs that best meet them, is therefore critical for success. Where the capabilities of the largest LLMs are core to an agent’s functionality, models such as GPT-4 Turbo and Claude 3 Opus continue to be required; where they are not, much smaller models such as Mixtral 8x7B and Claude 3 Haiku deliver dramatically faster and cheaper output.
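To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. The model names, prices, and speeds are hypothetical placeholders chosen only to roughly mirror the 300x/15x spreads described above; they are not figures from Artificial Analysis or from any provider.

```python
# Illustrative cost/latency comparison for an agent workload.
# All prices and speeds are hypothetical placeholders, not published figures.

from dataclasses import dataclass


@dataclass
class ModelProfile:
    name: str
    usd_per_1m_output_tokens: float   # hypothetical inference price
    output_tokens_per_second: float   # hypothetical inference speed


def estimate(profile: ModelProfile, output_tokens: int) -> tuple[float, float]:
    """Return (estimated cost in USD, estimated generation time in seconds)."""
    cost = profile.usd_per_1m_output_tokens * output_tokens / 1_000_000
    seconds = output_tokens / profile.output_tokens_per_second
    return cost, seconds


# Hypothetical "frontier" vs "small" profiles, roughly a 300x price gap
# and a 15x speed gap of the kind described above.
frontier = ModelProfile("large-frontier-model", usd_per_1m_output_tokens=60.0,
                        output_tokens_per_second=20.0)
small = ModelProfile("small-fast-model", usd_per_1m_output_tokens=0.2,
                     output_tokens_per_second=300.0)

# An agent run that generates 50k output tokens across its reasoning steps.
for model in (frontier, small):
    cost, seconds = estimate(model, output_tokens=50_000)
    print(f"{model.name}: ~${cost:.2f}, ~{seconds:.0f}s of generation time")
```

Under these assumed numbers, the same 50k-token agent run costs a few dollars and takes tens of minutes on the large model versus roughly a cent and a few minutes on the small one, which is why many agent builders route only the hardest steps to the largest models.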
Congratulations on the launch! AI Agents entering this Challenge are going to change the world! 🎉