With the rise of large language models, the demand for personalization in software is growing like never before. Plastic Labs’ newly launched Honcho platform adopts a “plug-and-play” approach designed to save developers from reinventing the wheel when it comes to building deep user profiles.
On April 11 (Beijing time), AI startup Plastic Labs announced that it has completed a $5.35 million Pre-Seed funding round. The round was led by Variant, White Star Capital, and Betaworks, with participation from Mozilla Ventures, Seed Club Ventures, Greycroft, and Differential Ventures. Angel investors included Scott Moore, NiMA Asghari, and Thomas Howell. At the same time, its personalized AI identity platform, Honcho, has officially opened for early access.
Since the project is still in its early stages, the broader crypto community knows very little about Plastic Labs. Alongside Plastic’s announcement on X regarding its funding and product launch, Daniel Barabander—General Partner and Advisor at lead investor Variant—shared an in-depth analysis of the project and its Honcho platform. The original content is as follows:
With the rise of large language model (LLM) applications, the demand for personalization in software has reached unprecedented levels. These applications rely on natural language, which changes depending on the person you’re speaking to—much like how you’d explain a math concept differently to your grandparents than to your parents or children. You instinctively tailor your communication to your audience, and LLM applications must similarly “understand” who they’re interacting with to deliver more effective and personalized experiences. Whether it’s a therapeutic assistant, a legal advisor, or a shopping companion, these applications need a true understanding of the user to deliver real value.
However, despite the critical importance of personalization, there are currently no ready-made solutions that LLM applications can easily integrate. Developers often have to cobble together fragmented systems to store user data (usually in the form of conversation logs) and retrieve it when needed. As a result, every team ends up reinventing the wheel by building their own user state management infrastructure. Worse yet, techniques like storing user interactions in a vector database and using retrieval-augmented generation (RAG) can only recall past conversations—they can’t capture deeper aspects of the user such as interests, communication preferences, or sensitivity to tone.
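The “cobbled-together” pattern described above can be sketched in a few lines. This is a toy illustration, not any real system: a bag-of-words cosine similarity stands in for an embedding model and a vector database, both of which are assumptions for demonstration. Note what it shows—retrieval surfaces past messages that look like the query, but nothing in the store represents abstract traits such as communication preferences.

```python
# Toy sketch of RAG over raw conversation logs. A production system
# would use an embedding model and a vector database; here a simple
# bag-of-words cosine similarity stands in for both.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Stand-in for an embedding model: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class ConversationStore:
    """Minimal vector-store stand-in holding raw conversation logs."""
    def __init__(self):
        self.logs: list[str] = []

    def add(self, message: str) -> None:
        self.logs.append(message)

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        # Rank stored messages by similarity to the query.
        ranked = sorted(self.logs,
                        key=lambda m: cosine(embed(query), embed(m)),
                        reverse=True)
        return ranked[:k]

store = ConversationStore()
store.add("I struggled with calculus derivatives last week")
store.add("Please order more coffee filters")
print(store.retrieve("help me with derivatives"))
# Recalls the matching past message, but cannot answer questions like
# "how does this user prefer to learn?" -- that trait was never modeled.
```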
Plastic Labs introduces Honcho, a plug-and-play platform that allows developers to easily implement personalization in any LLM application. Instead of building user modeling from scratch, developers can simply integrate Honcho to instantly access rich and persistent user profiles. These profiles go beyond what traditional methods can offer, thanks to the team’s use of cutting-edge techniques from cognitive science. Moreover, they support natural language queries, enabling LLMs to dynamically adapt their behavior based on a user’s profile.
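To make the “plug-and-play” idea concrete, here is a hypothetical sketch of what such an integration could look like from an application developer’s side. The class and method names (`UserProfileClient`, `record_trait`, `query`) are invented for illustration—the source does not document Honcho’s actual API—and a keyword lookup stands in for the natural-language queries an LLM would handle.

```python
from typing import Optional

class UserProfileClient:
    """Invented stand-in for a hosted user-profile layer.

    A real service would derive traits from interactions with an LLM;
    here traits are stored directly and queries match on keywords,
    purely to show the integration shape."""
    def __init__(self):
        self._traits: dict[str, dict[str, str]] = {}

    def record_trait(self, user_id: str, key: str, value: str) -> None:
        # One app contributes an insight about the user.
        self._traits.setdefault(user_id, {})[key] = value

    def query(self, user_id: str, question: str) -> Optional[str]:
        # Keyword lookup standing in for a natural-language query.
        for key, value in self._traits.get(user_id, {}).items():
            if key in question.lower():
                return value
        return None

profiles = UserProfileClient()
# A tutoring app records an insight about how the user learns...
profiles.record_trait("user-42", "learning style", "learns best through analogies")
# ...and a different app (say, a therapy assistant) reuses it.
print(profiles.query("user-42", "What learning style suits this user?"))
```

The point of the shape is that the application never builds user modeling itself: it records interactions and asks questions, and the shared profile persists across applications.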
By abstracting away the complexity of user state management, Honcho opens the door to a new level of hyper-personalized experiences for LLM applications. But its significance goes far beyond that: the rich and abstract user profiles generated by Honcho also pave the way for the long-elusive “shared user data layer.”
Historically, attempts to build a shared user data layer have failed for two main reasons:
Lack of interoperability: Traditional user data is often tightly coupled to specific application contexts, making it difficult to migrate across apps. For instance, a social platform like X might model users based on who they follow, but that data offers little value for one’s professional network on LinkedIn. Honcho, on the other hand, captures higher-order and more universal user traits that can seamlessly serve any LLM application. For example, if a tutoring app discovers that a user learns best through analogies, a therapy assistant could leverage that same insight to communicate more effectively—even though the two use cases are entirely different.
Lack of immediate value: Previous shared layers struggled to attract early application adopters because they didn’t provide tangible benefits upfront, even though these early users were key to generating valuable data. Honcho takes a different approach: it first solves the “primary problem” of user state management for individual applications. As more apps join, the resulting network effect naturally addresses the “secondary problem.” New applications will not only integrate for personalization but will also benefit from existing shared user profiles from the outset, completely bypassing the cold-start problem.
Currently, hundreds of applications are on the waitlist for Honcho’s closed beta, spanning use cases like addiction recovery coaching, educational companions, reading assistants, and e-commerce tools. The team’s strategy is to first focus on solving the core challenge of user state management for apps, and then gradually roll out the shared data layer to participating apps. This layer will be supported by crypto incentives: early integrators will receive ownership shares in the data layer and benefit from its growth. Additionally, blockchain mechanisms will ensure the system remains decentralized and trustworthy, alleviating concerns about centralized entities extracting value or building competing products.
Variant believes the Plastic Labs team is well-positioned to tackle the challenge of user modeling in LLM-driven software. The team experienced this pain point firsthand while building Bloom, a personalized chat-based tutoring app, and realized the app couldn’t truly understand students or their learning styles. Honcho was born from this insight—and it’s now solving a problem that every LLM application developer is bound to face.
This article is republished from [PANews]. Copyright belongs to the original author [Zen]. If you have concerns about the republication, please contact the Gate Learn team, who will address it through the proper channels.
Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute investment advice.
Other language versions of this article have been translated by the Gate Learn team. Do not reproduce, distribute, or plagiarize these translated versions without proper attribution to Gate.io.